Dataset Viewer
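The records below follow the column schema listed in the header row. As a minimal sketch (not an official usage snippet for this card), they could be loaded and filtered with the Hugging Face `datasets` library; the repository ID `org/benchmark-annotations` is a placeholder, since the actual dataset name is not given on this page.

```python
# Minimal sketch of loading and filtering the annotation table.
# NOTE: "org/benchmark-annotations" is a placeholder repository ID,
# not the actual location of this dataset.
from datasets import load_dataset

ds = load_dataset("org/benchmark-annotations", split="train")

# Keep included papers whose phenomenon is rooted under "Reasoning".
reasoning = ds.filter(
    lambda row: row["inclusion"] == "Include"
    and row["phenomenon_taxonomy_root"] == "Reasoning"
)

for row in reasoning.select(range(min(5, len(reasoning)))):
    print(row["bibkey"], "->", row["benchmark"])
```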
bibkey | title | inclusion | exclusion_criteria | exclusion_criteria_detail | short_summary | contribution | phenomenon_short | target_phenomenon | phenomenon_defined | phenomenon_definition | definition_scope | purpose_extra | task_definition | task_item_definition | task_definition_detail | task_source | task_dataset_size | task_dataset_metadata | dataset_metadata_detail | dataset_sampling_method | response_format | metric_definition | metric_definition_detail | task_source_detail | authorship | benchmark_availability | procedural_extra | notes_extra | task_train_val | task_dataset_size_extra | response_format_detail | metric_aggregation | metric_subscores | metric_subscores_detail | metric_metascoring | benchmark_location | benchmark | phenomenon_contested | task_face_validity | metric_face_validity | result_interpretation | results_comparison | results_comparison_explanation | results_realism | results_human_baseline | results_author_validity | results_author_validity_detail | metric_statistics | metric_access | task_ecology | task_ecology_detail | definition_integrity | definition_integrity_detail | task_dataset_size_detail | metric_fewshot | phenomenon_taxonomy_root | phenomenon_taxonomy_leaf | phenomenon_taxonomy_alternate | task_source_clean | dataset_sampling_method_clean | response_format_clean | metric_definition_clean | phenomenon_contested_clean | task_face_validity_clean | metric_face_validity_clean | results_realism_clean | results_author_validity_clean | task_ecology_clean | metric_statistics_clean |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mundlerSWTBenchTestingValidating2024
|
SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents
|
Include
| null | null |
A benchmark for generating code tests (unit tests) from natural language user GitHub issues.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Automatic code test generation (i.e. generating unit tests for issues)
|
Yes
|
The ability to generate valid tests to reproduce an issue in a codebase.
|
Comprehensive
| null |
Given a GitHub issue in natural language, the model must write tests that reproduce the described issue.
|
A GitHub issue (taken from SWE-Bench), code that contains the issue, and code with a 'golden patch' that fixes the issue. The goal is to write unit tests that fail on the faulty code but pass once the patch is applied.
|
Very comprehensive details about task definition.
|
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
1900
|
Yes
|
Length of the GitHub issue in tokens, original GitHub repository
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Whether the faulty code fails on the test and the gold-standard code passes it.
| null |
SWE-bench, which originates from real GitHub issues
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Description length in tokens, original GitHub repository
| null |
https://github.com/logic-star-ai/SWT-Bench
|
SWT-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Limitations in how the phenomenon was operationalised - all problems are in Python.
|
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Agents
|
Coding
| null |
['Real task', 'Another benchmark']
|
['Criterion']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
davidsonEvaluatingLanguageModel2024
|
Evaluating Language Model Agency through Negotiations
|
Include
| null | null |
The paper introduces a dynamic framework for evaluating LLMs using negotiation games in self-play and cross-play settings. They find that only closed-source models are able to successfully complete the task and that stronger LLMs don't always win over weaker opponents.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Alignment
|
Yes
|
Alignment metrics of interest are internal and external faithfulness as defined in Section 2.3, and the ability to follow instructions. [...] We measure instruction-following behavior of staying within the maximum number of words allowed to generate notes/messages (note/msg instruct) and the ability to correctly format internal offer indications using valid JSON (format instruct). [... (from 2.3)...] In natural language processing (NLP), faithfulness is a concept used to describe how accurately a model’s reasoning explains its answers/actions. To measure internal faithfulness, agents are asked to summarize acceptable offers for each Issue in their mental notes. [...] If Alice makes an offer to Bob for fewer slices than she stated as acceptable, we register this as an instance of internal unfaithfulness.
|
Subset
|
The paper is a bit unfocused in what it measures. The title says "Agency", the authors mainly note "Alignment" as motivation, and there is also a degree of "Negotiation skill" and "Theory of Mind".
|
The task is a series of negotiation games in which LLMs are given rules, a persona, protocols, and goals. Agents do both internal deliberation and external negotiation, and the game ends when a completion criterion is reached.
|
A single task is one round of a negotiation game that is either self-play or against another model.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
| null |
Yes
|
prompts, game settings, issues
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), Number of rounds completed
| null |
The authors generate a list of Games and Issues. These appear to have been crafted manually.
|
Academia
|
Yes
| null |
This "benchmark" defines too many phenomena to fit neatly in the framework
|
Test
| null |
Negotiation
|
Simple Mean
|
Yes
|
Scores are reported for different types of games.
| null |
https://github.com/epfl-dlab/LAMEN/
| null |
Contested
|
Partially
|
Partially
|
Yes
|
No
|
No comparisons made
|
It is an entirely constructed scenario (no available realistic setting)
|
No
|
No
| null |
mean with variance
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task simulates agent negotiations (so no humans are involved)
|
Composite phenomenon
|
Yes
| null | null |
Alignment
|
Alignment
| null |
['Author-crafted']
|
['Targeted']
|
['Interaction']
|
['Exact match', 'Reward']
|
['Contested']
|
['Partially']
|
['Partially']
|
['Not possible']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
helweMAFALDABenchmarkComprehensive2024
|
MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification
|
Include
| null | null |
The paper introduces MAFALDA, a benchmark that provides a unified classification of fallacies along with a taxonomy. It features manually annotated data with explanations, a tailored annotation scheme, and an evaluation method for subjective NLP tasks. Various language models and human performance are evaluated on fallacy detection and classification in a zero-shot learning setting.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
fallacies in reasoning
|
Yes
|
A fallacy is an erroneous or invalid way of reasoning. A fallacy is an argument where the premises do not entail the conclusion. Sub-elements: Fallacy of credibility, fallacy of logic, appeal to emotion
|
Comprehensive
| null |
Given a text, detect fallacies and classify them
|
Level 0: binary classification (fallacy or not). Level 1: groups fallacies into Aristotle’s categories: ‘Pathos’ (appeals to emotion), ‘Ethos’ (fallacies of credibility), and ‘Logos’ (fallacies of logic, relevance, or evidence). Level 2: fine-grained fallacies within the broad categories of Level 1; for instance, under fallacy of credibility there are specific fallacies such as appeal to tradition, ad populum, and guilt by association.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
9735
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
3 levels (different granularity)
| null |
GitHub
|
MAFALDA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
niuRAGTruthHallucinationCorpus2024
|
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models
|
Include
| null | null |
This paper targets word-level hallucinations in various tasks and domains in the RAG setting. It presents approximately 18,000 responses generated using RAG from diverse LLMs which are annotated at the word level for hallucination intensity. Hallucination frequencies are benchmarked across various LLMs, and hallucination detection methods are assessed versus a small LLM fine-tuned using the proposed dataset, RAGTruth.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
hallucination detection, specifically for RAG applications
|
Yes
|
"Hallucination in the context of LLMs usually refers to a situation where the
model generates content that is not based on factual or accurate information"
|
Subset
| null |
For a given reference-response pair, determine if it contains hallucinated content at the response level and span level.
|
A single item consists of source information (reference), an LLM-generated response (which may contain various degrees of hallucination), annotation of the location and type of hallucination (if any), and a brief annotated explanation of the hallucination observed.
|
Additional meta-data regarding the model and inference hyperparameters used to generate each sample is provided, along with details regarding the source and task type for the reference texts.
|
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
2700
|
Yes
|
source information index, generating model, temperature, whether quality issues are present in the sample, task type of the data, source of the original content, prompt used to generate the response, base content for RAG
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
15090 (train)
| null |
Simple Mean
|
Yes
|
by task type (QA, summarization, data-to-text writing)
| null |
https://github.com/ParticleMedia/RAGTruth
|
RAGTruth
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
Benchmark statistics and quality checking are described. Hallucination density is assessed across models used to generate the data, in relation to context length, and versus position in the text.
| null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Retrieval
| null |
Factuality
|
['Real task', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Random', 'Targeted']
|
['Short free response', 'Free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete']
| null |
wangIELMOpenInformation2022
|
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models
|
Include
| null | null |
They introduce a new open information extraction (OIE) benchmark designed to evaluate the relational knowledge stored in pre-trained language models (LMs) such as BERT and GPT (published in 2022). Their method involves transforming these pre-trained LMs into zero-shot OIE systems to assess their performance on both existing and novel factual OIE datasets. Their results show that pre-trained LMs achieve competitive performance, even surpassing state-of-the-art supervised OIE methods on certain datasets without any additional training data.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
open information extraction, i.e. answering “fill-in-the-blank” questions when given a pre-defined relation category
|
Yes
|
"In this work, we set up a new open information extraction (OIE) benchmark, called IELM, towards testing the general and open relational information stored in pre-trained LMs."
|
Comprehensive
|
For definition_integrity - the paper looks at both standard OIE and factual OIE.
|
"In this work, we set up a new open information extraction (OIE) benchmark, called IELM, towards testing the general and open relational information stored in pre-trained LMs. We refer to OIE as it is a task that is designed to extract open relations from massive corpora without requiring a pre-defined relation category."
|
"For open information extraction (OIE), we take an input as a NP-chunked sentence and output a set of triples. Below is an example.
Input DylanNP was born in MinnesotaNP, and was awarded Nobel PrizeNP.
Output (Dylan; born in; Minnesota), (Dylan; awarded; Nobel Prize).
NP denotes the noun phrase."
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Based on knowledge graphs (KG) e.g. Wikidata
|
27,400,440 triples; 6,096,709 arguments; 5,418 predicates; 9,925,937 documents
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
No, link is broken
| null | null |
Test
|
The dataset size above is summed over 4 datasets in Table 2.
|
Output is a set of triples
| null |
Yes
|
Metrics are reported for each OIE dataset (CaRB(existing), Re-OIE206 (existing), TAC KBP-OIE (novel), Wikidata-OIE (novel)).
| null |
https://github.com/cgraywang/IELM - This repository is empty.
|
IELM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
They carry out an error analysis:
"We argue that we are measuring a lower bound for what LMs know. To further understand the shortcomings of the current method, we conduct an error analysis of the errors in precision on all datasets. We choose BERTLARGE for the study. We sample 100 documents from the Wikidata-OIE dataset, and manually check the reasons for the errors."
They find errors from incorrect arguments, missing pairs in predicate mapping, correct triples that are not covered by Wikidata, and incorrect predicate phrases.
|
The authors carry out some error analysis: "We argue that we are measuring a lower bound for what LMs know. To further understand the shortcomings of the current method, we conduct an error analysis of the errors in precision on all datasets. We choose BERTLARGE for the study. We sample 100 documents from the Wikidata-OIE dataset, and manually check the reasons for the errors"
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
No
| null | null |
NLP
|
Extraction
| null |
['Crowd-sourced', 'Procedurally-generated']
|
['Convenience']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Other']
|
heTGEAErrorAnnotatedDataset2021
|
TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models
|
Include
| null | null |
TGEA (Text Generation Error Annotation) is an error-annotated dataset with multiple benchmark tasks for text generation. Following the authors' hierarchical error taxonomy, crowdsourced workers manually labeled 12k erroneous sentences with semantic information, including error types, associated text spans, error corrections, and rationales behind errors.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Text generation error analysis
|
Yes
|
"The key interest of this dataset is detecting and annotating text generation errors from PLMs."
|
Subset
| null |
The task requires models to analyze machine-generated Chinese text to detect, locate, classify, correct, and explain generation errors according to a comprehensive taxonomy of error types.
|
A single item consists of machine-generated Chinese text with annotations marking error spans, associated spans, corrections, error type classifications, and explanatory rationales.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
47,058
|
Yes
|
error type classification, token counts, error span locations, span distances, error distribution
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train (37,646), Dev (4,706), test (4,706)
| null |
None, Separate metrics for each sub-task with no single aggregated score
|
Yes
|
Erroneous text detection, Erroneous and associated span detection, Error type classification, Error correction, Rationale generation
| null |
https://download.mindspore.cn/dataset/TGEA/
|
TGEA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors validate their benchmark with inter-annotator agreement statistics for different tasks, Cohen's Kappa coefficients, a rigorous quality control protocol, annotation verification on sampled texts, and human performance baselines.
|
Simple means for performance metrics; agreement percentages and Cohen's Kappa for annotation reliability.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Factuality
| null | null |
['LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
huangCEvalMultiLevelMultiDiscipline2023
|
C-EVAL: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models
|
Include
| null | null |
The paper introduces the C-EVAL evaluation suite for assessing advanced knowledge and reasoning abilities of foundation models in Chinese. It spans four difficulty levels and 52 disciplines. It also introduces C-EVAL HARD, a subset of challenging subjects that require advanced reasoning.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Knowledge and reasoning in Mandarin Chinese and on questions situated in the Chinese context
|
No
| null |
Comprehensive
| null |
Multiple choice questions from real-world human exams in China at different difficulty levels (e.g., high school, college) and different disciplines (e.g., STEM, humanities).
|
A multiple-choice question with four possible answers.
| null |
Human exam questions (e.g. GRE questions)
|
12342
|
Yes
|
topic area (e.g., STEM, humanities) and difficulty level (e.g., middle school)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Dev: 260, Valid: 1346
| null |
Simple Mean
|
Yes
|
Subject/exam (and by extension difficulty)
| null |
https://github.com/hkust-nlp/ceval/tree/main
|
C-EVAL
|
Contested
|
They follow the lead of popular knowledge and reasoning benchmarks, so it's hard to say here.
|
Not sure about this. Compared to other similar benchmarks, yes. In general, probably not.
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
Cultural
| null |
['Human exams']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Partially']
|
['Partially']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
myungBLEnDBenchmarkLLMs2024
|
BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages
|
Include
| null | null |
The paper introduces BLEnD, a novel benchmark comprising hand-crafted question-answer pairs designed to evaluate LLMs on everyday cultural knowledge across 16 countries/regions and 13 languages, including low-resource ones. It demonstrates significant performance disparities among models, showing cultural and linguistic biases, especially in underrepresented regions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
cultural knowledge and multilingual cultural commonsense understanding
|
Yes
|
knowledge of everyday cultural practices that are specific to different countries and regions. This includes understanding what people commonly do, eat, or experience in their daily lives within a specific cultural and linguistic context. Specifically, dimensions such as food, sports, celebrations, education, family, and work-life are considered.
|
Subset
| null |
The task is to evaluate large language models on their ability to correctly answer short-answer and multiple-choice questions about everyday cultural practices from various countries and regions, using either local languages or English. Human evaluation is conducted on short-answer questions with annotators coming from the tested regions.
|
"Al-en-06": {
"question": "대한민국 학교 급식에서 흔히 볼 수 있는 음식은 무엇인가요?",
"en_question": "What is a common school cafeteria food in your country?",
"annotations": [
{
"answers": [
"김치"
],
"en_answers": [
"kimchi"
],
"count": 4
},
{
"answers": [
"밥",
"쌀밥",
"쌀"
],
"en_answers": [
"rice"
],
"count": 3
},
...
],
"idks": {
"idk": 0,
"no-answer": 0,
"not-applicable": 0,
"others": []
}
},
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
52.6k
|
Yes
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
by language (native and English)/country (region)
| null |
https://github.com/nlee0212/BLEnD
|
BLEnD
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
for short-answer questions, there is a human evaluation, which to some extent can represent the validity of the questions
| null |
simple mean, ANOVA for p-values, Tukey HSD
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
Cultural
| null |
['Author-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Tests']
|
yaoWebShopScalableRealWorld2022
|
WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
|
Include
| null | null |
The paper introduces WebShop, a simulated online shopping environment where agents try to follow natural language instructions to find and buy the right products. WebShop benchmark is designed to test how well agents can search, navigate, and make decisions on the web. The authors train models using imitation and reinforcement learning, and show that the best ones can even handle similar tasks on real sites like Amazon and eBay.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Natural language understanding and sequential decision-making in web environments.
|
No
|
To evaluate agents that can understand human-provided natural language instructions and perform grounded actions in a realistic web environment, e.g. generating search queries, navigating results, selecting options, and (at the end, if successful) purchasing a product that matches the instruction.
|
Subset
| null |
The task is to follow a natural language instruction to find and purchase a product in a simulated e-commerce environment. The agent must search, navigate pages, select product options, and choose the best match based on the instruction.
|
A natural language instruction specifying a desired product (including attributes, options, and price constraints), together with the starting state of the simulated shopping environment. The agent must then complete the task by navigating and interacting with the website to find and purchase a suitable product.
| null |
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
500
|
Yes
|
product category, product attributes, product options, product price
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Reward is computed based on the final product chosen by the agent, compared against the known attributes, options, and price of the target product.
| null | null |
Academia
|
Yes
| null |
Here the evaluation is fully automated, which allows for easier reproduction; this seems like a significant advantage compared to other benchmarks.
| null |
“[...] a total of 12,087 instructions into an i.i.d. distributed train / development / test split of 10,587 / 1,000 / 500 instances"
| null |
Simple Mean
|
Yes
|
Paper reports breakdowns by reward components: attribute match score, option match score, price match, and type match.
| null |
https://webshop-pnlp.github.io/
|
WebShop
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
Yes
|
Yes
|
Yes
|
They discuss the performance gap between models and humans, provide a fairly detailed analysis of error types (e.g. failure in option matching or limited exploration), present evidence of sim-to-real transfer to Amazon and eBay to indicate external validity, and run component-wise ablations and choice-oracle experiments (where the model does not have to choose) to diagnose bottlenecks.
|
The authors report average task score and success rate across trials. They also include standard deviation/error bars in some result plots (e.g. Figure 4), mainly to show the variation across multiple runs.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
WebShop simulates online shopping using real product data and a realistic UX, but it operates in a custom environment with a simplified interface and a deterministic search engine. So while the core interactions reflect a real-world activity, it doesn’t capture the full complexity or variability of actual web browsing with a human properly in the loop or real user behaviour.
|
Composite phenomenon
|
No
| null | null |
Agents
|
Web
| null |
['Real task', 'Crowd-sourced']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Free response', 'Interaction']
|
['Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Partial']
|
['Mean', 'Std']
|
sanyalRobustLRDiagnosticBenchmark2022
|
ROBUSTLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners
|
Include
| null | null |
Deductive reasoning is an important skill that modern language models should possess. However, small logical perturbations of deductive reasoning problems can lead to inconsistent model responses. To test this consistency, the paper introduces RobustLR, a benchmark consisting of logical problems ("theories") and variations thereof that should be consistently answered correctly by models.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
robustness of deductive reasoning against small shifts in logical operators or rephrasing.
|
Yes
|
"We consider a deductive reasoner (language model) to be logically robust if the model behavior is consistent across various logical perturbations."
|
Comprehensive
|
Consistency here can be misinterpreted: The perturbations applied to problems cause different conclusions. Consistency is here defined as being accurate across perturbations, i.e. changing the label when the input changes. This is in contrast to many other works that regard consistency as invariance.
|
The task has 2 levels: the underlying task is conducting deductive reasoning, a classification problem with labels "True", "False", and "Unknown". The "meta-task" is being consistent across a set of related problems.
|
One item in the benchmark is a set: "original problem" + a set of perturbations on the problem. Each problem is a set of facts, rules and deduction.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null |
The synthetic nature of the benchmark strongly limits its ecological validity for real user interaction, but the authors are very clear and transparent about it. The lack of ecological validity is compensated by internal validity.
|
Test
| null |
yes
|
Simple Mean
|
Yes
|
different kinds of perturbations of the problem.
| null |
https://github.com/INK-USC/RobustLR
|
RobustLR
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors clearly state limitations due to the simple composition of rules used for perturbations and the synthetic toy nature of the dataset. They also validate that humans can achieve good scores on the problems while language models don't.
|
mean of weighted-F1 scores
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
|
Robustness
|
['Procedurally-generated']
|
['Random', 'Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
albalakFETABenchmarkFewSample2022
|
FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue
|
Include
| null | null |
Examines few-sample task transfer across 17 subtasks (e.g., utterance-level classification, dialogue-level classification, span extraction, multiple-choice) in open-domain dialogue with diverse properties (dyadic vs. multi-party, anonymized vs. recurring speaker, varying dialogue lengths).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Task transfer, transferring knowledge contained in related tasks, in few-sample settings (10% of original instance set)
|
Yes
|
Task transfer, transferring knowledge contained in related tasks. Definition 3 (Task Transfer): Given a source task T_S = {Y_S, f_S(X_S)} and target task T_T = {Y_T, f_T(X_T)}, task transfer is the use of a learning algorithm, A, to improve the learning of f_T by using the knowledge in T_S.
They also define Few-Sample: For this reason, we focus on the few-sample setting, defined in FETA as 10% of the original instance set. Out of 10%, 5%, and 1%, 10% was empirically determined to be the smallest percentage that retains labels from all label sets in both the train and development partitions.
|
Subset
|
They define separately: (1) cross-dataset task transfer, where X_S ≠ X_T, and we also have P(X_S) ≠ P(X_T) and D_S ≠ D_T (domain shift); vs. (2) intra-dataset task transfer, where X_S = X_T and there is no domain shift.
|
The tasks are classic NLP tasks subsumed in dialog - e.g., emotional recognition during chit-chat conversations, or character identification from a TV transcript.
|
Input = a dialogue (from DailyDialog); Subtask = Emotion Recognition; Output = Happiness; OR Input = a transcript from a TV Show (from Friends); Subtask = QA, question = How long did Rachael train for?; Output = 2 weeks.
|
They focus on intra-dataset transfer but not cross-domain transfer.
|
Modified from another benchmark (e.g. translation into another language), Human TV show; Human chitchat dialogues
|
71,212
|
Yes
|
They provide the data source (dialogue, Friends), the task name (e.g., emotion recognition, or QA), and a categorisation of task type (e.g., utterance classification vs. span extraction)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Depends on the subtask category (Utterance Classification, Dialogue Classification, Multiple Choice, Span Extraction)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train=28,261, Dev = 5,132
| null |
Simple Mean
|
Yes
|
They provide results over the task categories - Utterance Classification, Dialogue Classification, Multiple Choice, Span Extraction
| null |
https://alon-albalak.github.io/feta-website/
|
FETA
|
Widely-agreed
|
Partially
|
Partially
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Mean, and they also show a delta (for the change in aggregate scores across all tasks). It is unclear if this is a range or a standard deviation. I think it's a range.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Using the model for various tasks contained in dialogue seems a more general, ecologically valid use case than understanding Friends transcripts, though the latter could also be an applied use case.
|
Composite phenomenon
|
Yes
| null | null |
Language Modelling
|
Adaptability
| null |
['Another benchmark', 'Author-crafted']
|
['Convenience']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Partially']
|
['Partially']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
beanLINGOLYBenchmarkOlympiadLevel2024
|
LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low Resource and Extinct Languages
|
Include
| null | null |
The paper introduces LINGOLY, a new benchmark built on Linguistics Olympiad puzzles in low-resource and extinct languages to test genuine reasoning capabilities in LLMs. The benchmark is crafted to cover diverse reasoning complexity, linguistic subject areas, instruction types, and high/low resource settings. The paper uncovers error patterns between high- and low-resource settings and shows the ongoing challenges in multi-step, out-of-domain reasoning.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-step, out-of-domain linguistic reasoning, low-resource languages,
|
Yes
|
We argue that a benchmark task measures reasoning if the task 1) cannot be done without reasoning (necessity) and 2) can be done via reasoning (sufficiency). However, the combination of these features is difficult to achieve in practice since memorisation and contamination may reduce the necessity of reasoning, and in tasks which draw on background knowledge, as in most ‘commonsense’ benchmarks[7], reasoning itself is insufficient to complete the task.
|
Subset
|
No-context baseline: evaluate whether model performance drops when the context is removed. This is used to assess whether the model relied on memorization or on reasoning from the linguistic clues in the context.
|
The task is to understand genuine reasoning capabilities of LLMs by providing context of low-resource linguistic information and questions to solve based on the given context (or without context to penalize the memorized knowledge). The expected output is a concise textual answer that can be matched up with ground-truth labels.
|
Below is a problem sheet…
{PREAMBLE}
{CONTEXT}
{QUESTIONS}
{SUBQUESTIONS}
Now respond to the following…
{REPEAT 1 QUESTION}
Format your response as…
{FORMAT TEMPLATE}
|
Compare the model performance with and without contextual information to penalize the memorized knowledge and evaluate the genuine reasoning abilities of LLMs using the linguistic cues from the given knowledge.
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
| null |
Yes
|
human difficulty, linguistic subjects, task format, language
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
The task from LINGOLY is adapted from official Linguistics Olympiads puzzle sets rather than everyday language usage scenarios or standard benchmarking corpora.
|
Academia
|
Yes
| null |
One critical point is whether language models perform poorly due to the unfamiliar format or to out-of-domain reasoning: the mismatch between the puzzle's presentation style and the distribution of model instruction templates may cause certain reasoning failures depending on the model type. It would be nice to see whether benchmark results show such patterns across model types.
|
Test
|
1,133 questions all for testing.
|
Free response exists but is excluded from evaluation (the only case where an instance has a missing answer is when the intended answer was a free response, e.g., “explain your reasoning”. These questions are included in the dataset but removed from scoring as they are not compatible with being machine-scored.)
|
Simple Mean
|
Yes
|
Human difficulty, puzzle format, linguistic subject, language resourcedness
| null |
The Hugging Face dataset works fine, while the GitHub zip file requires a passcode for access.
|
LINGOLY
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
Across models, performance is consistently higher on problems with easier human difficulty and higher resource languages than those of harder difficulty and lower-resource languages.
(LLMs tested have limited reasoning abilities about low-resource languages and do not achieve the multi-step reasoning required in the harder questions, in addition to errors of following instructions alongside core reasoning tasks.)
|
The authors use a weighted mean in calculating an approximate human performance threshold but not for model performance. They take a weighted average of the annual medal thresholds for ‘Advanced’ problems.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
While the benchmark comes from authentic Linguistics Olympiad puzzles, these are still competition-style questions rather than real-world usage scenarios. Hence it can be categorized as a representative task of a specialized exam setting.
|
Single cohesive phenomenon
|
No
| null | null |
Reasoning
|
Logical
| null |
['Human exams', 'Author-crafted']
|
['Convenience']
|
['Multiple choice', 'Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
nasirGameTraversalBenchmarkEvaluatingPlanning2024
|
GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps
|
Include
| null | null |
The paper investigates the planning capabilities of LLMs by proposing GameTraversalBenchmark (GTB), a benchmark consisting of diverse 2D grid-based game maps. The paper also provides metrics to give insights into the planning abilities of LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Planning abilities of LLMs
|
No
| null |
Subset
| null |
The task is a game based on 2D maps. They consider a generated map as one data point for the benchmark. The map’s generated objective coordinates are the points the LLM agent must traverse to attain the most reward.
|
Each item is a 2D grid-based map of alphanumeric characters.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
150
|
No
| null |
Unknown
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), The paper defines a reward score
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/umair-nasir14/Game-Traversal-Benchmark/
|
GameTraversalBenchmark (GTB)
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean and STD
|
Outputs alone
| null | null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Planning
| null |
['LLM-generated']
|
['Unknown']
|
['Structured']
|
['Exact match', 'Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['']
|
['Mean', 'Std']
|
feiLawBenchBenchmarkingLegal2024
|
LawBench: Benchmarking Legal Knowledge of Large Language Models
|
Include
| null | null |
LawBench tests 21 models on 20 Chinese legal tasks (500 instances each), which are classified along Bloom's taxonomy into knowledge memorization, understanding, and application. It is the first benchmark for the Chinese legal domain, and the first for civil law (vs. common law) jurisdictions.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
legal knowledge memorization, understanding, and application
|
Yes
|
LawBench is the first evaluation benchmark developed for the Chinese legal domain. It defines the phenomenon in terms of legal knowledge capabilities mapped to cognitive levels from Bloom’s Taxonomy.
|
Subset
|
Bloom's taxonomy for task grouping
|
Perform 20 specific legal functions using text-based input and return a defined output (of various forms, including classification label, summary, number)
|
Varies strongly between the 20 tasks, but generally: a legal input (fact description, question, judgement) and a required output of various forms.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
10000
|
Yes
|
Task ID, Bloom's taxonomy level (used to indicate difficulty), task type, metric
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null |
Most tasks adapted from existing legal datasets: CAIL, JEC_QA, and LEVEN.
|
Mostly academia, 1 research institute, 1 high school
|
Yes
| null | null |
Test
| null |
Response format varies by task. Dataset sampling above: mostly "convenience sampled"/rehashed from existing benchmarks.
|
Simple Mean
|
Yes
|
By task (each of 20), by blooms taxonomy level (each of memorization, understanding, application), by zero-shot vs. one-shot
| null |
https://github.com/open-compass/LawBench
|
LawBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Simple means and macro-averaging (mean across tasks, which is identical here because each task has same # of instances)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Validity varies strongly between tasks. Memorization tasks (2/20) do not reflect real-world human work. Most others are taken from benchmarks in QA format. Some are "partial real tasks", e.g. answering legal questions scraped from a legal QA site.
|
Composite phenomenon
|
Yes
| null | null |
Law
| null | null |
['Real task', 'Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
yuksekgonulWhenWhyVisionlanguage2023
|
When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It?
|
Include
| null | null |
This paper creates the Attribution, Relation, and Order (ARO) benchmark to systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order information. They demonstrate that VLMs can perform well on image-text retrieval over existing datasets without using the composition and order information.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Compositional understanding in VLMs
|
No
| null |
Subset
| null |
ARO consists of Visual Genome Attribution, to test the understanding of objects’ properties; Visual Genome Relation, to test for relational understanding; and COCO-Order & Flickr30k-Order, to test for order sensitivity in VLMs.
|
A sample would be an image, 1 true and 1 false statement about the image, the two objects presented in the image, the attributes of the objects
| null |
Modified from another benchmark (e.g. translation into another language)
|
28,700
|
No
| null |
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
|
Stratification based on the four introduced tasks: 1) Visual Genome Attributions, 2) Visual Genome Relations, 3) COCO Order and 4) Flickr30k Order
| null |
https://huggingface.co/datasets/gowitheflow/ARO-Visual-Attribution
|
ARO
|
Not defined
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
macro-accuracy
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Compositional
| null |
['Another benchmark']
|
['Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
xieWhodunitBenchEvaluatingLarge2024
|
WhodunitBench: Evaluating Large Multimodal Agents via Murder Mystery Games
|
Include
| null | null |
The paper evaluates LLMs' ability to participate in (and answer questions about) murder mystery games. In the arena component (agents play as either detective or murderer in a multi-agent setting), the agents are tested on win rate against the other models. The QA component is split based on capability categories (Perception, Role-Play, Decision-making and Cognition).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
The authors evaluate four distinct capabilities: multi-modal perception, interaction, reasoning and goal achievement.
|
Yes
|
• Multi-modal Perception is the most basic ability for LMAs, which requires LMAs to perceive information from the multimodal environment (e.g., vision and language).
• Interaction requires LMAs, whether through role-playing or direct engagement, to communicate with the environment or other agents to gather essential information for task completion.
• Reasoning requires LMAs to combine their internal knowledge with newly gathered information to perform long-chain, multi-step reasoning.
• Decision Making and Goal Achievement requires LMAs to establish clear goals and make independent decisions in response to environmental changes. This autonomous decision-making is crucial for effectively navigating and completing tasks in dynamic settings.
|
Subset
|
Since the benchmark evaluates many things, the level of detail differs between the constructs.
|
The agent arena component is based on "winning" in a murder mystery game, whereas the Chain-of-Evaluation component is based on a QA format.
|
In the arena setting, each task item is a single murder mystery game with a winner. In the CoE, each task is a multiple-choice question.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
3000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Win rate
| null |
The arena is based on a script, and the questions are manually annotated. The murder game scripts come from real sources.
|
Academia
|
A repository is provided but contains no code.
| null | null |
Test
|
Only reported approximately
|
CoE is multiple choice, arena is extended interaction
|
Simple Mean
|
No
| null | null |
https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games
|
WhodunitBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean (no variance or standard deviation reported)
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
It is based on a purely fictional game, with the hope that capabilities are general enough to transfer.
|
Composite phenomenon
|
Yes
| null | null |
Agents
| null | null |
['Author-crafted', 'Crowd-sourced']
|
['Convenience']
|
['Multiple choice', 'Interaction']
|
['Exact match', 'LLM-as-a-Judge', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
saparinaAMBROSIABenchmarkParsing2024
|
AMBROSIA: A Benchmark for Parsing Ambiguous Questions into Database Queries
|
Include
| null | null |
Paper introduces a new benchmark dataset designed to evaluate text-to-SQL parsers' ability to handle ambiguous user requests. The dataset includes questions demonstrating scope ambiguity, attachment ambiguity, and vagueness, along with their interpretations and corresponding SQL queries. The authors highlight that existing large language models (LLMs) struggle with these ambiguities, suggesting a need for improved parser development.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
text-to-SQL parsing
|
Yes
|
Evaluation of text-to-SQL parsers capable of recognizing and interpreting ambiguous requests
|
Comprehensive
| null |
text-to-SQL parsing, generate database, validate generated databases
|
Question, prompt, SQL query, scope/ambiguity/vagueness, generated database, score (human annotation)
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
5093
| null | null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://ambrosia-benchmark.github.io/
|
AMBROSIA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
|
No
| null |
mean and variance
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
|
Natural Language
| null |
['Author-crafted', 'LLM-generated']
|
['Targeted']
|
['Structured']
|
['Exact match', 'Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Std']
|
augustyniakThisWayDesigning2022
|
This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish
|
Include
| null | null |
The authors introduce LEPISZCZE, a new, comprehensive benchmark for Polish NLP with a large variety of tasks and a high-quality operationalization of the benchmark. LEPISZCZE was designed with flexibility in mind: including new models, datasets, and tasks is as simple as possible while still offering data versioning and model tracking. In the first run of the benchmark, 13 experiments (task and dataset pairs) were tested based on the five most recent LMs for Polish. Five datasets from the Polish benchmark are reused and eight novel datasets are added.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
model performance on Polish language across various tasks (13)
| null |
The ability of language models to understand and process Polish language across a diverse range of NLP tasks, evaluated using 13 task-dataset pairs that include classification, natural language inference, and sequence labeling tasks.
|
Subset
| null |
Each task in the LEPISZCZE benchmark is defined as a standard NLP problem—such as classification, sequence labeling, or natural language inference—applied to Polish-language datasets. These tasks test specific linguistic capabilities of models, like sentiment analysis, named entity recognition, part-of-speech tagging, and others.
|
there are datasets for 13 tasks.
|
Entailment Classification, Q&A Classification, Sentiment Analysis, Paraphrase Classification, Abusive Clauses Detection, Aspect-based Sentiment Analysis, NER, POS Tagging, Political Advertising Detection, Punctuation Restoration, Dialogue Acts Classification
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
30,003
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
204,504 and 9,970
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/clarin-pl , https://github.com/CLARIN-PL/LEPISZCZE
|
LEPISZCZE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
mean and standard deviation
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Multilinguality
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean', 'Std']
|
huiUDABenchmarkSuite2024
|
UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis
|
Include
| null | null |
The paper introduces the UDA (Unstructured Document Analysis) benchmark. UDA questions are expert-annotated Q&A pairs on PDF and HTML documents, constructed from datasets of academic papers, financial reports, and Wikipedia pages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Analysing unstructured documents
|
No
|
Vague and multifaceted: "we propose a benchmark suite that enables the evaluation of various components of RAG-based unstructured document analysis"
|
Subset
| null |
LLMs are given an unstructured document and a factual question about the contents of that document. The correct answer is some extracted text or figure from the document.
|
An unstructured document might be a financial report in PDF format, containing tabular data. The question might ask for the total value of some column, like "total vested shares during the 2012 fiscal year, in millions," and correct answers might be [1.46, 1.45972].
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
29,590
|
Yes
|
topic area
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null |
Hand-written answers are "expert annotated" by the authors of six Q&A datasets; the authors curate and filter these without changing the labels.
|
Academia
|
Yes
| null | null |
Test
| null |
"Free responses" are intended to be extracted from the provided file's text.
|
Simple Mean
|
Yes
|
Scores by underlying Q&A dataset, context type (whether document chunks are provided by RAG or by human annotators)
| null |
https://github.com/qinchuanhui/UDA-Benchmark
|
UDA
|
Widely-agreed
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean/sum; % improvement between contexts
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Retrieval
| null | null |
['Author-crafted', 'Another benchmark']
|
['Convenience']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Other']
|
xiaFOFOBenchmarkEvaluate2024
|
FOFO: A Benchmark to Evaluate LLMs’ Format-Following Capability
|
Include
| null | null |
FOFO Is a benchmark for domain-specific format following capabilities. It evaluates a wide array of domains and subdomains across a diverse set of formats from specific medical forms to Maple. The specific examples are generated using GPT-4 and human validation.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Format following
|
Yes
|
"precise adherence to specified formats given by humans"
|
Subset
| null |
The task is to generate dummy data in a specified format defined by detailed instructions within a given domain.
|
A single formatting instruction with a domain (e.g., Manufacturing), a subdomain (e.g., Optimization), and a format (e.g., "Standard Operating Procedures") with an example of the format.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
494
|
Yes
|
domain,subdomain,format
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://github.com/SalesforceAIResearch/FoFo
|
FOFO
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
While following formatting instructions is real, the data is still dummy.
|
Composite phenomenon
|
Yes
| null | null |
Instruction Following
| null | null |
['LLM-generated']
|
['Convenience']
|
['Structured']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
wangMINTEvaluatingLLMs2024
|
MINT: EVALUATING LLMS IN MULTI-TURN INTERACTION WITH TOOLS AND LANGUAGE FEEDBACK
|
Include
| null | null |
MINT extends existing benchmarks to evaluate the effects of code interpreter usage and multi-turn feedback on LLM performance. It filters benchmark tasks to ones that benefit from feedback and multi-turn interactions and evaluates different feedback types, from "lazy user" to "informative user", with and without tools.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Reasoning, coding, and decision-making
|
No
| null |
Subset
|
Each high-level phenomenon is measured separately
|
The task measures how performance on existing benchmarks (QA) changes when the model is given access to GPT-4 feedback and/or a code interpreter.
|
The tasks come from different benchmarks. Most are in a QA format.
| null |
Modified from another benchmark (e.g. translation into another language)
|
586
|
Yes
|
source dataset
|
Random sample (creators defined a task space and sampled from it)
|
Short free response (e.g. single word or number), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall)
| null |
The tasks are sampled from 8 different benchmarks.
|
Academia
|
Yes
| null | null |
Test
| null |
While the expected result is often a short free response, it can be created through interaction.
|
Simple Mean
|
Yes
|
Provided by number of turns of feedback
| null |
https://github.com/xingyaoww/mint-bench
|
MINT
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
They do a partial study with actual human feedback on the benchmark tasks.
|
No
|
No
| null | null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Agents
|
Coding
| null |
['Another benchmark']
|
['Random']
|
['Short free response', 'Interaction']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['No']
|
['Representative']
| null |
valmeekamPlanBenchExtensibleBenchmark2023
|
PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change
|
Include
| null | null |
PlanBench introduces a suite of tasks relevant to planning using similar formats to the International Planning Competition. The tasks are taken from either Blocksworld or logistics and also obfuscated to avoid reliance on common-sense knowledge.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Planning
|
Yes
|
planning involves coming up with a course of actions (policy) which when executed would take an agent from a certain initial state to a desired world state
|
Subset
| null |
The main task (planning) is: given a description of an initial state (e.g., a block configuration), rules, and a goal state, come up with a plan that transforms the initial state into the goal state. The sub-tasks are variations of these components.
|
A specified state, actions, and goal state + a query for what the LLM should do (come up with a plan, predict plan execution) etc.
|
There are in total 8 different tasks with slightly different goals (e.g., direct planning, replanning, execution prediction)
|
Procedurally-generated task examples (e.g. Creating instances from a template)
|
1910
|
Yes
|
domain
|
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null |
The plan is a fairly structured set of actions, but not quite as structured as e.g., an API
|
Simple Mean
|
Yes
|
Domain, Obfuscated (Bool)
| null |
https://github.com/karthikv792/LLMs-Planning
|
PlanBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task is based on a real competition, but one which has a level of gaminess
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Planning
| null |
['Procedurally-generated']
|
['Random']
|
['Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
zhangMELAMultilingualEvaluation2024
|
MELA: Multilingual Evaluation of Linguistic Acceptability
|
Include
| null | null |
The paper introduces a multilingual acceptability judgement benchmark covering a diverse set of 10 languages, all annotated by expert linguists. The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones in a human language. The paper establishes LLM baselines on this benchmark, and investigates cross-lingual transfer in acceptability judgements with XLM-R.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Linguistic Acceptability
|
Yes
|
The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones.
|
Comprehensive
| null |
The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones.
|
a sentence
| null |
hand-written by linguists in respective languages, taken from textbooks, handbooks and journal articles in theoretical syntax + some examples taken from previous benchmarks
|
46k
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Matthews Correlation Coefficient (MCC), a measure of similarity between binary distributions that takes values from -1 to 1 and always yields 0 for any two uncorrelated distributions, regardless of class imbalance.
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
train set: 33,293, validation: 3,970
| null |
Simple Mean
|
No
| null | null |
https://github.com/sjtu-compling/MELA
|
MELA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
| null |
No
|
No
|
No
| null |
simple mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Expert-crafted']
|
['Random']
|
['Multiple choice']
|
['Exact match', 'Correlation']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
etxanizLatxaOpenLanguage2024
|
Latxa: An Open Language Model and Evaluation Suite for Basque
|
Include
| null | null |
The paper introduces 4 multiple-choice evaluation datasets for Basque: EusProficiency, comprising 5,169 questions from official language proficiency exams; EusReading, comprising 352 reading comprehension questions; EusTrivia, comprising 1,715 trivia questions from 5 knowledge areas; and EusExams, comprising 16,774 questions from public examinations.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
language proficiency, knowledge and reasoning
|
No
| null |
Subset
|
The benchmark includes 4 different tasks
|
There are 4 tasks: reading comprehension, language proficency, mcq questions on Basque language and culture, and mcq questions on Basque government
|
an mcq question
| null |
Human exam questions (e.g. GRE questions)
|
~7.5k
|
No
| null |
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/hitz-zentroa/latxa?tab=readme-ov-file
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
accuracy, F1, standard deviation
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Human exams']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean', 'Std', 'Other']
|
tangStrucbenchAreLarge2024
|
Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data?
|
Include
| null | null |
The paper introduces a new benchmark to assess LLMs’ proficiency in structuring tables and introduces a novel fine-tuning method, cognizant of data structures, to bolster their performance.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Generating structured tabular data
|
Yes
|
LLMs are tasked with generating complex structured tables, a process that involves understanding both the content and the specific format requirements, such as LaTeX syntax. This task extends beyond simple text generation as it demands precision not just in content creation but also in adhering to a detailed and precise structural format.
|
Comprehensive
| null |
The task is generating structured tabular data.
|
text tables, HTML tables, and LaTeX tables and their description
| null |
Modified from another benchmark (e.g. translation into another language)
|
~16k
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Structured response (e.g. valid JSON, API call alone)
|
P-Score (Prompting Score) and H-Score (Heuristical Score)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
Train: 14.1k, Test: 1,700
| null |
Simple Mean
|
No
| null | null |
https://github.com/gersteinlab/Struc-Bench?tab=readme-ov-file
|
Struc-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null | null |
Code Generation
| null | null |
['Another benchmark']
|
['Random']
|
['Structured']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
riemenschneiderExploringLargeLanguage2023
|
Exploring Large Language Models for Classical Philology
|
Include
| null | null |
They define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. The experiments provide the first benchmarking analysis of existing models of Ancient Greek.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
| null |
No
|
The tasks are supposed to assess semantic and world knowledge in LLMs.
|
Comprehensive
| null |
Measuring semantic and world knowledge in LLMs
|
A sentence
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
~550
|
No
| null |
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Link is provided but the data is not there
| null | null |
Test, Train
| null | null | null |
No
| null | null |
https://github.com/Heidelberg-NLP/ancient-language-models/tree/main
| null |
Not defined
| null |
Yes
|
Yes
|
No
| null |
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null | null | null | null | null |
Multilinguality
| null | null |
['Author-crafted']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['No definition']
|
['']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
qiPreservingKnowledgeInvariance2023
|
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction
|
Include
| null | null |
The paper introduces ROBUST, a benchmark designed to evaluate open information extraction models by measuring their ability to generalize knowledge extraction across syntactically diverse sentences that share the same semantic content.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
the generalization of open information extraction
|
Yes
|
[...] each example is a knowledge-invariant clique that consists of sentences with structured
knowledge of the same meaning but with different syntactic and expressive forms. [...] a model is judged to be robust if its performance is consistently accurate on the overall cliques.
|
Comprehensive
| null |
Open Information Extraction (OpenIE) aims to extract n-ary knowledge tuples {(a1,p,a2,...,an)} consisting of n arguments and one predicate from the natural text.
|
Sentences with arguments and one predicate form a set (clique), where sentences are semantically invariant.
|
The base task is OpenIE. Each tuple of sentence+arguments+predicate within a clique is analyzed. The "meta-task" is doing well on the worst tuple within one clique.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
1272 cliques, 4971 sentences
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null |
n-tuples of text are extracted from the response.
|
Simple Mean
|
No
| null | null |
https://github.com/qijimrc/ROBUST
|
ROBUST
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
For each tuple, the F1 is computed; the minimum is then taken across each clique and aggregated across the dataset as a mean.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Extraction
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Convenience']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
shahWhenFLUEMeets2022
|
WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model for Financial Domain
|
Include
| null | null |
The paper introduces the Financial Language Understanding Evaluation (FLUE), an open-source, comprehensive suite of benchmarks for the financial domain. These include new benchmarks across 5 NLP tasks in the financial domain as well as common benchmarks used in previous research.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding in the financial domain
|
Yes
|
The ability of LLMs to perform across 5 financial tasks such as financial sentiment analysis, news headline classification, named entity recognition, structure boundary detection, and question answering.
|
Subset
| null |
The task is defined as evaluating language models on a suite of five financial domain NLP tasks: financial sentiment analysis, news headline classification, named entity recognition, structure boundary detection, and question answering.
|
N/A, for every task there will be a respective item
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
969, 234, 2282, 302, 131, 333
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
for all 5 tasks: 19,367 and 2,674
| null |
Simple Mean
|
No
| null | null |
https://salt-nlp.github.io/FLANG/
|
FLUE
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Simple mean: F1 scores and accuracy. MSE. nDCG and MRR. Perplexity
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Finance
| null | null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean', 'Other']
|
kalyanWikiDONewBenchmark2024
|
WikiDO: A New Benchmark Evaluating Cross-Modal Retrieval for Vision-Language Models
|
Include
| null | null |
The authors argue that current VLM benchmarks are insufficient to assess the OOD generalization capability of models due to high visual and linguistic similarity between the evaluation and finetuning datasets. They propose WikiDO, which consists of image-text data derived from the Wikipedia Diversity Observatory, a diverse source of Wikipedia articles spanning several diversity axes, including geography, gender, ethnicity, and domains/topics.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Generalization / OOD performance
|
No
| null |
Subset
| null |
The proposed dataset can be used for both image-to-text retrieval, i.e. retrieving the most relevant textual description(s) from a set, and text-to-image retrieval, i.e. retrieving the most relevant image(s) from a dataset.
|
A single row in the dataset has the path of the image, the Wiki ID of the image, the reference text from Wikipedia, the title of the Wikipedia article, the topic label from the Wikipedia Diversity Observatory, and the generated caption of the image.
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
train: 384K pairs, 2 test sets (ID and OOD) of size 3K each.
|
Yes
|
topic
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Retrieval
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train: 384K pairs, 2 test sets (ID and OOD) of size 3K each.
| null |
Simple Mean
|
Yes
|
In-distribution vs Out-of-distribution
| null |
https://huggingface.co/datasets/Pavankalyan/WikiDO
|
WikiDO
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors show that across various settings, nearly all models perform better on in-distribution (ID) data than on out-of-distribution (OOD) data, except for CLIP, which performs equally well in both settings.
|
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null | null |
Retrieval
| null | null |
['Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
marchisioUnderstandingMitigatingLanguage2024
|
Understanding and Mitigating Language Confusion in LLMs
|
Include
| null | null |
The paper introduces a benchmark to measure language confusion in LLMs. They investigate language confusion on the line and word level in two practical settings: a) Monolingual generation, where a user queries the LLM in a given language, implicitly requesting an answer in the same language; and b) cross-lingual generation, where a user explicitly instructs a model to generate text in a different language.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Language Confusion
|
Yes
|
LLMs are often unable to consistently generate text in the user’s desired language, or the
appropriate language given the context. They call this category of error “language confusion”.
|
Subset
| null |
They investigate language confusion on the line and word level in two practical settings: a) Monolingual generation, where a user queries the LLM in a given language, implicitly requesting an answer in the same language; and b) cross-lingual generation, where a user explicitly instructs a model to generate text in a different language.
|
A sentence (prompt)
| null |
Modified from another benchmark (e.g. translation into another language), For some part of the data they include human generated prompts
|
7100
|
Yes
|
Language of the prompt and the original data source
|
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
The paper introduces 2 new metrics for language confusion. Line-level pass rate (LPR) and Word-level pass rate (WPR).
| null | null |
Industry
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/for-ai/language-confusion
|
LCB
|
Contested
|
Yes
|
Yes
|
Yes
|
No
| null |
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Another benchmark', 'Author-crafted']
|
['Random']
|
['Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
itoGeneralizationCapacityNeural2024
|
On the generalization capacity of neural networks during generic multimodal reasoning
|
Include
| null | null |
The paper introduces gCOG, a multimodal reasoning dataset designed to measure various types of OOD generalisation (distractor generalisation, systematic compositional generalisation, and productive compositional generalisation). The authors train various encoder architectures from scratch and compare their performance. Transformers can systematically generalise at scale, but no architectures can productively generalise.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multimodal generalisation
|
Yes
|
"OOD generalization – the ability to perform tasks beyond the training distribution" (1)
|
Comprehensive
| null |
Models are given an 8x8 grid containing multicoloured letters at different indices, and must follow a binary tree of "if-then-else" instructions to answer a question like "Get the position of the orange 't'".
|
A query in natural language, an image of an 8x8 grid in some .jpg-like format, and a correct answer, which is either a shape ("d"), a colour ("orange"), or a location ((5, 4)).
|
The concrete dataset used for their evaluation is not provided; only a generator object in Python is given.
|
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
task tree depth, num distractors
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null | null | null | null |
Simple Mean
|
Yes
|
IID and OOD accuracy on varying numbers of distractors and tree depths
| null |
https://github.com/IBM/gcog
|
Generic COG (gCOG)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
"Identifying neural architectures that can robustly generalize OOD is a central goal in artificial intelligence. Compositional generalization benchmarks, which explicitly evaluate for generalization, provide a good testbed for measuring these capabilities" (9)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Language Modelling
|
Adaptability
| null |
['Another benchmark', 'Procedurally-generated']
|
['Random', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
liMultimodalArXivDataset2024
|
Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models
|
Include
| null | null |
Multimodal ArXiv consists of ArXivCap, a figure-caption dataset sourced from scientific papers, and ArXivQA, a QA dataset generated by prompting GPT-4V for QA pairs on ArXivCap entries. Results show that fine-tuning on these datasets boosts performance on the MathVista benchmark, and that evaluation results for various scientific plot comprehension subtasks are poor.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
comprehending scientific plots
|
No
| null |
Subset
|
The phenomenon is vaguely defined but the tasks are precisely defined
|
Vision-to-text subtasks: caption a single (or multiple) scientific figure(s), including an in-context learning subtask, and generate paper titles given figures and captions.
|
A ground truth paper title and a list of scientific figures and corresponding captions
| null |
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
100,000
|
Yes
|
arXiv domain, arXiv DOI
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/MMInstruction/ArxivQA; https://huggingface.co/datasets/MMInstruction/ArxivCap
|
Multimodal ArXiv
|
Not defined
|
Yes
| null |
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
Yes
|
Yes
|
"after training the model on QA pairs from each domain... Most domains hurt the Figure QA task. This suggests that synthetic Figure QA might not be the best benchmark for assessing realistic reasoning ability." (14373-4)
"our Multimodal ArXiv dataset sources from ArXiv papers due to their accessibility and open-source licenses. This approach may overlook the diversity of disciplines and data modalities present in the broader scientific literature." (14378)
|
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
VQA
|
Understanding
| null |
['Real task', 'LLM-generated']
|
['Targeted']
|
['Short free response', 'Free response']
|
['Soft match', 'LLM post-processing']
|
['No definition']
|
['Yes']
|
['']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
zouVGBenchEvaluatingLarge2024
|
VGBench: Evaluating Large Language Models on Vector Graphics Understanding and Generation
|
Include
| null | null |
The paper introduces VGBench, a comprehensive benchmark for vector graphics images that tests both visual understanding and generation. Formats like SVG, TikZ, and Graphviz are included, and performance is generally strong, though LLMs do worse with the lower-level SVG format.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
processing vector graphics
|
No
| null |
Comprehensive
| null |
For the QA task (VGQA), models are given a vector graphics representation (in textual format) and a multiple choice question about a high-level feature of the image, like the colour of a depicted entity. For the generation task (VGen), models must generate vector graphics code from a textual description.
|
For VGQA: a snippet of vector graphics code, a question with multiple choice answers, and a correct answer.
For VGen: a textual description, the desired output format (e.g. SVG), and some ground truth vector graphics code.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
10,124
|
Yes
|
vector graphic format
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
|
4,279 examples in VGQA, 5,845 examples in VGen
| null |
Simple Mean
|
Yes
|
vector graphics format and question subtype (e.g. "Domain", "Layout", "Relation" questions)
| null |
https://huggingface.co/datasets/vgbench/VGen; https://huggingface.co/datasets/vgbench/VGQA
|
VGBench
|
Widely-agreed
|
Yes
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Instruction Following
| null | null |
['Real task', 'Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Multiple choice', 'Structured']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
zhangXSemPLRCrosslingualSemantic2023
|
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
|
Include
| null | null |
The paper introduces XSEMPLR, a unified benchmark for cross-lingual semantic parsing featuring 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. They use XSEMPLR to conduct a benchmark study on a wide range of multilingual language models, including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and
decoder-based models (Codex, BLOOM). The findings show that large multilingual
language models are still inadequate for performing CLSP tasks. They also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
cross-lingual semantic parsing
|
Yes
|
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs).
|
Comprehensive
| null |
The task is to train a model to convert a sentence in natural language into a meaning representation (e.g., SQL, programming code, Prolog, Functional Query Language, etc.).
|
A pair of input and output, where the input is a text in natural language and the output is the input's meaning representation
| null |
Modified from another benchmark (e.g. translation into another language)
|
Train set: ~42k, test set: ~7,500, dev set: ~5,500
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/psunlpgroup/XSemPLR
|
XSEMPLR
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Another benchmark']
|
['Random']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
sunInformalLanguageProcessing2024
|
Toward Informal Language Processing: Knowledge of Slang in Large Language Models
|
Include
| null | null |
Using movie subtitles, the authors construct a dataset that supports evaluation on a diverse
set of tasks pertaining to the automatic processing of slang. For both evaluation and finetuning, they show the effectiveness of their dataset on two core applications: 1) slang detection, and 2) identification of regional and historical sources of slang from natural sentences.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
informal language processing (Knowledge of slang in LLMs)
|
No
|
They focus on two core tasks for informal language processing. First, they evaluate the extent to which LLMs can reliably detect slang usages in natural sentences. Second,
they assess whether LLMs can be used to identify regional-historical sources of slang via a text classification task.
|
Subset
| null |
Task1: Given a set of sentences, they evaluate slang detection at both sentence-level and word-level.
Task2: Given a sentence containing a slang usage, they ask the model to classify its source (e.g. region and age).
|
a sentence of natural language
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
25,000
|
Yes
|
Annotator confidence, Movie ID, Region, Year
|
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall); they also report two metrics to compare an LLM’s predictive confidence in slang usages relative to their literal counterparts.
| null |
The benchmark is built on top of the OpenSubtitles corpus.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/amazon-science/slang-llm-benchmark
| null |
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Multilinguality
| null | null |
['Crowd-sourced']
|
['Random']
|
['Multiple choice']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
wangPretrainingLanguageModel2023
|
ON PRE-TRAINED LANGUAGE MODELS FOR ANTIBODY
|
Include
| null | null |
This paper introduces the AnTibody Understanding Evaluation (ATUE) benchmark to systematically assess the representation capabilities of general and antibody-specific pre-trained language models across a range of antibody-related tasks. It also explores how incorporating biological mechanisms into pre-training can enhance model performance and evaluates the transferability of learned representations to real-world applications such as drug discovery and immune system analysis.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLMs' capability to do antibody representation learning and biological reasoning with sequence specificity
|
Yes
|
how LLMs perform in antibody tasks with different specificity and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, the authors evaluate whether the learned pre-trained antibody representations can be applied to real-world antibody problems, such as drug discovery and immune process understanding.
|
Subset
| null |
Evaluate the ability of pre-trained language models to perform on four supervised antibody-related prediction tasks—antigen binding, paratope prediction, B cell maturation classification, and SARS-CoV-2 antibody discovery—each varying in antibody specificity. These tasks assess whether the models can capture biologically meaningful information from antibody sequences.
|
N/A; there are four tasks
| null |
Real task examples (e.g. GitHub issues)
|
3242, 1662, 88094, 22000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Matthews Correlation Coefficient (MCC), and AUC (Area Under the ROC Curve)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
15,128/3,242, N/A
| null |
Simple Mean
|
No
| null | null |
https://github.com/dqwang122/EATLM
|
ATUE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Biology
| null | null |
['Real task']
|
['Convenience', 'Targeted', 'Criterion']
|
['Structured']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
bajpaiCanLLMsReplace2024
|
Can LLMs replace Neil deGrasse Tyson? Evaluating the Reliability of LLMs as Science Communicators
|
Include
| null | null |
This paper focuses on evaluating the reliability of current LLMs as science communicators. They introduce a dataset, SCiPS-QA, comprising 742 Yes/No queries embedded in complex
scientific concepts, along with a benchmarking suite that evaluates LLMs for correctness and consistency across various criteria. They also benchmark three proprietary LLMs from the OpenAI GPT family and 13 open-access LLMs from the Meta Llama-2, Llama-3, and Mistral families.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Reliability of LLMs as Science Communicators
|
No
|
Can existing LLMs successfully and faithfully answer scientific reasoning questions that require understanding the nuances of scientific knowledge?
|
Comprehensive
| null |
A binary (Yes/No) classification task where the model is asked to answer a scientific question.
|
A question in science
| null |
Not explained
|
742
|
Yes
|
topic, date
|
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/Prasoon1207/llm-science-miscommunication/blob/main/data/data.csv
|
SCiPS-QA
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
Simple mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
General Science
| null | null |
['Unknown']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
hauserLargeLanguageModelsExpertlevel2024
|
Large Language Models' Expert-level Global History Knowledge Benchmark (HiST-LLM)
|
Include
| null | null |
The paper introduces the History Seshat Test for LLMs (HiST-LLM), based on a subset of the Seshat Global History Databank, which provides a structured representation of human historical knowledge, containing 36,000 data points across 600 historical societies and over
2,700 scholarly references. Using this dataset, they benchmark a total of seven models from the Gemini, OpenAI, and Llama families.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLM's Expert-level Global History Knowledge
|
No
|
The ability of the model to answer expert-level history questions.
|
Comprehensive
| null |
The task is to ask the model a multiple-choice question about history.
|
A multiple-choice question
| null |
Human experts created the examples
|
36000
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/seshat-db/HiST-LLM
|
HiST-LLM
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
History
| null | null |
['Expert-crafted']
|
['Random']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
sadatMSciNLIDiverseBenchmark2024
|
MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference
|
Include
| null | null |
This paper introduces MSCINLI, a new dataset comprising 132,320 sentence pairs from five diverse scientific domains to enhance the study of scientific Natural Language Inference (NLI). Baseline models, including fine-tuned and prompted LLMs, reveal the dataset's challenging nature, as well as performance degradation due to domain shifts, highlighting the unique characteristics of each domain. Additionally, employing both scientific NLI datasets in intermediate task transfer learning showcases improvements in downstream scientific tasks.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Natural language inference (semantic relationship between two sentences), scientific domains
|
Yes
|
predicting the semantic relation between two sentences extracted from research articles
|
Comprehensive
| null |
sentence pairs, multiple choice on semantic relation between sentences
| null |
question, prompt, domain, class, difficulty, response correct/score
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
127,320
|
Yes
|
difficulty, domain
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
difficulty
| null |
GitHub, huggingface
|
MSciNLI
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
No
| null |
mean and variance, t-tests
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
General Science
| null | null |
['Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Std', 'Tests']
|
dengNewTermBenchmarkingRealtime2024
|
NewTerm: Benchmarking Real-Time New Terms for Large Language Models with Annual Updates
|
Include
| null | null |
This paper introduces NewTerm, an adaptive benchmark designed for the real-time evaluation of new terms in large language models (LLMs) to address their struggle with real-time information due to knowledge cutoffs. The benchmark is constructed using a highly automated method allowing flexible and minimal human effort updates, revealing a performance reduction of over 20% on various LLMs with new terms and highlighting difficulties in generalizing to distant new terms. Annual updates to NewTerm, starting with 2022 and 2023, are planned to continuously assess and analyze the evolving challenge of new terms in LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Updating of knowledge, real-time evaluation of new terms introduced after knowledge cutoff
|
Yes
|
flexible updates for real-time information
|
Comprehensive
| null |
Answer questions about new dictionary terms introduced after the knowledge cutoff
|
Question, multiple choice answers, response, correct
| null |
Real task examples (e.g. GitHub issues), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
Domains: The Choice of Multiple Alter (COMA), The Choice of Similar Terms (COST), Common Sense Judgement (CSJ)
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Domains: The Choice of Multiple Alter (COMA), The Choice of Similar Terms (COST), Common Sense Judgement (CSJ)
| null |
GitHub
|
NewTerm
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
Updating
| null |
['Real task', 'Procedurally-generated']
|
['Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
yeRoTBenchMultilevelBenchmark2024
|
RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning
|
Include
| null | null |
LLMs are increasingly deployed in settings where they can use tools, e.g. call functions to retrieve real-time weather information. This paper proposes a benchmark measuring the robustness of LLMs in selecting tools when these are specified under noise (e.g. the function name is perturbed).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
tool use when tool names or arguments are mislabeled
|
No
|
LLMs should exhibit consistent tool use when tools or their arguments are mislabeled.
|
Subset
| null | null |
Prompt + List of available tools + ground truth tool + ground truth arguments
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
|
735
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
existing benchmark + small perturbations
|
Academia
|
Yes
| null |
A) The noise induced in the benchmark significantly alters the *expected behaviour* of the model. For instance, imagine "Get_GPS_COORDINATES: This tool is used for fetching weather information for a specified location." is a perturbation of "Get_WEATHER: This tool is used for fetching weather information for a specified location." Clearly, the inconsistent information provided to the LLM between the function name and its docstring changes the expected behaviour of the model, and hence "consistent" behaviour is not necessarily a sign of robustness. This casts doubt on the construct validity of "Robust Tool Use". A positive note: the authors test human performance, and humans get scores between 69% and 89%, showing the task is still somewhat possible for humans.
B) The authors built their dataset by perturbing an existing dataset. Their explanations of the existing dataset are negligible. It should be best practice to at least explain exactly what the task of the original dataset is, along with its size and limitations.
|
Test, Train
| null | null |
Simple Mean
|
Yes
|
Different intermediate stages to a full success.
| null |
https://github.com/Junjie-Ye/RoTBench
|
RoTBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Agents
|
Tool Use
| null |
['Procedurally-generated']
|
['Random', 'Convenience']
|
['Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
maMMLONGBENCHDOCBenchmarkingLongcontext2024
|
MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations
|
Include
| null | null |
The paper presents a long-context multimodal benchmark dataset of more than 1k expert-annotated questions over long PDFs, which require aggregating evidence across multiple locations and evidence formats (text, images, charts, etc.) to answer. MMLongBench-Doc presents a challenge for strong models such as GPT-4o and other large vision language models (LVLMs), demonstrating the need for improved long-context LVLM capabilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context document understanding
|
Yes
|
"the automatic understanding of [long-context] documents. The understanding of these lengthy documents brings new challenges for LVLMs", including localization and cross-page comprehension
|
Comprehensive
| null |
Give a document to a model and have it answer a question regarding information in the document.
|
Documents are PDF files. Questions are stored in JSON format with the following attributes: document ID, document type, question, answer, evidence pages, evidence sources, and answer format.
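As an illustration, the sketch below shows what a single question record with the attributes listed above might look like. The key names and values are hypothetical; the released JSON may use different identifiers.

```python
import json

# Hypothetical question record mirroring the listed attributes.
question_record = {
    "doc_id": "report_2023_annual.pdf",
    "doc_type": "Financial report",
    "question": "What was the total revenue reported for 2022?",
    "answer": "12.4 billion USD",
    "evidence_pages": [3, 47],
    "evidence_sources": ["Pure-text", "Table"],
    "answer_format": "String",
}

print(json.dumps(question_record, indent=2))
```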
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
1082
|
Yes
|
evidence source, answer format, question length statistics, answer length statistics, document length statistics
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
type of evidence source, number of evidence pages involved in answering the question, document type
| null |
https://github.com/mayubo2333/MMLongBench-Doc
|
MMLongBench-Doc
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
kuratovBABILongTestingLimits2024
|
BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
|
Include
| null | null |
The BABILong benchmark tests language models’ ability to reason across facts distributed in extremely long documents by scattering relevant facts among less relevant natural text. The paper finds that LLMs effectively use less than 20% of the context in such settings, with reasoning complexity negatively impacting performance. Multiple methods, including in-context reasoning, retrieval-augmented generation, and context extension, are applied to profile model capabilities on these long-context tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
language models’ ability to reason across facts distributed in extremely long documents
|
Yes
|
"language models’ ability to reason across facts distributed in extremely long documents"
|
Comprehensive
| null |
Perform one of 20 reasoning tasks (e.g., fact chaining, simple induction, deduction, counting, and handling lists/sets), generally presented in question format, given a long context with relevant and distracting articles.
|
A long-context input text, question, and the question's answer based on the input
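A rough sketch of the "reasoning-in-a-haystack" construction described above is given below: a handful of relevant facts are scattered among distracting filler sentences. The facts, filler sentences, and sizes are placeholders; the actual generation pipeline (which draws on existing reasoning tasks and a background text corpus) differs in its details.

```python
import random

# Relevant facts, question, and answer for one reasoning task instance.
facts = [
    "Mary travelled to the kitchen.",
    "John moved to the garden.",
    "Mary picked up the apple.",
]
question = "Where is the apple?"
answer = "kitchen"

# Distractor text standing in for long, less relevant natural text.
filler = ["It was a quiet afternoon in the village."] * 50

# Interleave the relevant facts at random positions within the filler.
context = filler[:]
for fact in facts:
    context.insert(random.randrange(len(context) + 1), fact)

item = {"input": " ".join(context), "question": question, "target": answer}
```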
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
facts per task, relevant facts per task, reasoning task type
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
|
input length, task type, context size
| null |
https://github.com/booydar/babilong
|
BABILong
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The benchmark's advantages over existing related benchmarks are argued on the basis of its design and a correlation study, and the authors analyze the benchmark's content and the relation between model performance and capability.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
wangAdaLEvalEvaluatingLongcontext2024
|
Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks
|
Include
| null | null |
Ada-LEval presents a length-adaptable benchmark for the long-context understanding capabilities of LLMs, involving challenging questions for reliable evaluation and context lengths extending to the ultra-long setting. SOTA open and closed models are evaluated to demonstrate the current limitations of LLMs in such settings.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
| null |
No
|
The context window is a notable factor in LLM performance and is critical for handling long texts. The effectiveness of LLMs in managing long text remains open for exploration and assessment.
|
Comprehensive
| null |
1. Take in a long text and arrange the text segments in the correct order.
2. Choose the best answer from multiple candidate answers to a question based on a given long text.
|
Not provided, but generally the task samples consist of either a question with many candidate answers, or a series of text segments to be rearranged (per the task definition).
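For the first task above (ordering shuffled text segments), the sketch below shows one way a length-adaptable instance could be built and scored. The segment count, splitting scheme, and scoring by exact match of the full predicted order are assumptions based on the task description, not the authors' exact implementation.

```python
import random

def make_tsort_instance(document: str, num_segments: int = 4):
    """Split a document into equal-sized segments and shuffle them."""
    step = max(1, len(document) // num_segments)
    segments = [document[i:i + step] for i in range(0, len(document), step)][:num_segments]
    order = list(range(num_segments))
    random.shuffle(order)
    shuffled = [segments[i] for i in order]
    # Reference answer: for each original position, which shuffled label
    # (1-indexed) holds that segment.
    answer = [order.index(i) + 1 for i in range(num_segments)]
    return shuffled, answer

def score_exact_match(predicted: list[int], answer: list[int]) -> int:
    """1 if the full predicted ordering matches the reference, else 0."""
    return int(predicted == answer)

shuffled, answer = make_tsort_instance("A" * 400 + "B" * 400 + "C" * 400 + "D" * 400)
print(score_exact_match(answer, answer))  # 1
```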
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
over 80k
|
Yes
|
total samples per context length, max tokens, average number of tokens
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation), instruction following rate
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
context lengths from 2k to 16k
| null |
https://github.com/open-compass/Ada-LEval
|
Ada-LEval
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Comparisons with traditional long-context benchmarks such as GovReport demonstrate that Ada-LEval requires more overall text understanding to complete.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Real task', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Distribution', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|