---
# BizFinBench: A Business-Driven Real-World Financial Benchmark for Evaluating LLMs
📖 <a href="https://arxiv.org/abs/25xx.xxxxx">Paper (coming soon)</a> | 🐙 <a href="https://hithink-research.github.io/BizFinBench/">GitHub</a> | 🤗 <a href="https://huggingface.co/datasets/HiThink-Research/BizFinBench">Hugging Face</a>

In recent years, multimodal benchmarks for general domains have guided the rapid development of multimodal models on general tasks. The financial field, however, has its own peculiarities: it features unique chart types (e.g., candlestick charts, technical indicator charts) and draws on a wealth of specialized financial knowledge (e.g., futures, turnover rate).

Large language models excel across general tasks, yet judging their reliability in logic-heavy, precision-critical domains such as finance, law, and healthcare is still difficult. To address this challenge, we propose **BizFinBench**, the first benchmark grounded in real-world financial applications. BizFinBench consists of **6,781** well-annotated queries in Chinese, covering five dimensions: numerical calculation, reasoning, information extraction, prediction recognition, and knowledge-based question answering, which are mapped to nine fine-grained categories.
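
For readers who want to explore the data directly, here is a minimal loading sketch using the 🤗 `datasets` library; the config and split names are illustrative assumptions, so check the `configs` list in this card's metadata for the exact values.

```python
# Minimal loading sketch (assumed usage, not official instructions).
# The config/split names below are placeholders; consult the dataset card.
from datasets import load_dataset

ds = load_dataset(
    "HiThink-Research/BizFinBench",
    name="Financial_Named_Entity_Recognition",  # assumed config name
    split="test",                               # assumed split name
)

print(len(ds))  # number of annotated queries in this subtask
print(ds[0])    # inspect one example record
```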
## 📢 News
- 🚀 [16/05/2025] We released **BizFinBench** (V1), the first benchmark grounded in real-world financial applications.
## 💡 Highlights
- 🔥 **Benchmark:** We propose **BizFinBench**, the first evaluation benchmark in the financial domain that integrates business-oriented tasks, covering 5 dimensions and 9 categories. It is designed to assess the capacity of LLMs in real-world financial scenarios.
- 🔥 **Judge model:** We design a novel evaluation method, **Iterajudge**, which enhances the capability of LLMs as judges by refining their decision boundaries in specific financial evaluation tasks.
- 🔥 **Key insights:** We conduct a comprehensive evaluation of **25 LLMs** on BizFinBench, uncovering key insights into their strengths and limitations in financial applications.
## 📕 Data Distribution
This dataset contains multiple subtasks, each focusing on a different financial understanding or reasoning ability:

| Dataset | Description | Evaluation Dimensions | Volume |
| -------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ |
| **Anomalous Event Attribution** | A financial anomaly attribution evaluation dataset assessing models' ability to trace stock fluctuations based on given information (e.g., timestamps, news articles, financial reports, and stock movements). | Causal consistency, information relevance, noise resistance | 1,064 |
| **Financial Numerical Computation** | A financial numerical computation dataset evaluating models' ability to perform accurate numerical calculations in financial scenarios, including interest rate calculations, gain/loss computations, etc. | Calculation accuracy, unit consistency | 581 |
| **Financial Time Reasoning** | A financial temporal reasoning evaluation dataset assessing models' ability to comprehend and reason about time-based financial events, such as "the previous trading day" or "the first trading day of the year." | Temporal reasoning correctness | 514 |
| **Financial Data Description** | A financial data description evaluation dataset measuring models' ability to analyze and describe structured/unstructured financial data, e.g., "the stock price first rose to XX before falling to XX." | Trend accuracy, data consistency | 1,461 |
| **Stock Price Prediction** | A stock price movement prediction dataset evaluating models' ability to forecast future stock price trends based on historical data, financial indicators, and market news. | Trend judgment, causal rationality | 497 |
| **Financial Named Entity Recognition** | A financial named entity recognition dataset assessing models' ability to identify entities (Person, Organization, Market, Location, Financial Products, Date/Time) in short/long financial news. | Recognition accuracy, entity category correctness | 433 |
| **Emotion Recognition** | A financial sentiment recognition dataset evaluating models' ability to discern nuanced user emotions in complex financial market environments. Inputs include multi-dimensional data such as market conditions, news, research reports, user holdings, and queries, covering six emotion categories: optimism, anxiety, pessimism, excitement, calmness, and regret. | Emotion classification accuracy, implicit information extraction and reasoning correctness | 600 |
| **Financial Tool Usage** | A financial tool usage dataset evaluating models' ability to understand user queries and appropriately utilize various financial tools (investment analysis, market research, information retrieval, etc.) to solve real-world problems. Tools include calculators, financial encyclopedia queries, search engines, data queries, news queries, economic calendars, and company lookups. Models must accurately interpret user intent, select appropriate tools, input correct parameters, and coordinate multiple tools when necessary. | Tool selection rationality, parameter input accuracy, multi-tool coordination capability | 641 |
| **Financial Knowledge QA** | A financial encyclopedia QA dataset assessing models' understanding and response accuracy regarding core financial knowledge, covering key domains: financial fundamentals, markets, investment theories, macroeconomics, etc. | Query comprehension accuracy, knowledge coverage breadth, answer accuracy and professionalism | 990 |
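
As a quick arithmetic check, the nine per-task volumes in the table sum exactly to the 6,781 queries cited in the introduction:

```python
# Sanity check: the per-task volumes from the table above sum to the stated total.
volumes = {
    "Anomalous Event Attribution": 1064,
    "Financial Numerical Computation": 581,
    "Financial Time Reasoning": 514,
    "Financial Data Description": 1461,
    "Stock Price Prediction": 497,
    "Financial Named Entity Recognition": 433,
    "Emotion Recognition": 600,
    "Financial Tool Usage": 641,
    "Financial Knowledge QA": 990,
}
assert sum(volumes.values()) == 6781  # matches the 6,781 total
```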
## Performance Leaderboard
The models are evaluated across multiple tasks, with results marked to highlight the top three performers for each task:
- 🥇 indicates the top-performing model.
- 🥈 represents the second-best result.
- 🥉 denotes the third-best performance.
Task abbreviations follow the subtask names above: AEA = Anomalous Event Attribution, FNC = Financial Numerical Computation, FTR = Financial Time Reasoning, FTU = Financial Tool Usage, FQA = Financial Knowledge QA, FDD = Financial Data Description, ER = Emotion Recognition, SP = Stock Price Prediction, FNER = Financial Named Entity Recognition.

| Model | AEA | FNC | FTR | FTU | FQA | FDD | ER | SP | FNER | Average |
|--------------------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| … | … | … | … | … | … | … | … | … | … | … |
| DeepSeek-R1 (671B) | 80.36 | 🥇 64.04 | 🥉 75.00 | 81.96 | 🥇 91.44 | 98.41 | 39.67 | 55.13 | 🥇 71.46 | 🥈 73.05 |
| QwQ-32B | 84.02 | 52.91 | 64.90 | 84.81 | 89.60 | 94.20 | 34.50 | 🥈 56.68 | 30.27 | 65.77 |
| DeepSeek-R1-Distill-Qwen-14B | 71.33 | 44.35 | 16.95 | 81.96 | 85.52 | 92.81 | 39.50 | 50.20 | 52.76 | 59.49 |
| DeepSeek-R1-Distill-Qwen-32B | 73.68 | 51.20 | 50.86 | 83.27 | 87.54 | 97.81 | 41.50 | 53.92 | 56.80 | 66.29 |
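
The Average column matches the arithmetic mean of the nine task scores (e.g., DeepSeek-R1's nine scores average to 73.05). The sketch below, with hypothetical model names and scores, illustrates these two leaderboard conventions; it is not the official evaluation code.

```python
# Illustrative sketch of the leaderboard conventions: the Average column is the
# mean of the nine task scores, and 🥇/🥈/🥉 mark the top three models per task.
# All model names and scores here are hypothetical placeholders.

def add_average(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Mean of each model's per-task scores, as in the Average column."""
    return {model: sum(tasks.values()) / len(tasks) for model, tasks in scores.items()}

def medals_for_task(scores: dict[str, dict[str, float]], task: str) -> dict[str, str]:
    """Top three models on one task, in medal order."""
    ranked = sorted(scores, key=lambda m: scores[m][task], reverse=True)
    return dict(zip(ranked[:3], ["🥇", "🥈", "🥉"]))

demo = {
    "Model-A": {"AEA": 80.0, "FNC": 60.0, "FTR": 70.0},
    "Model-B": {"AEA": 85.0, "FNC": 55.0, "FTR": 72.0},
    "Model-C": {"AEA": 78.0, "FNC": 62.0, "FTR": 65.0},
}
print(add_average(demo))             # {'Model-A': 70.0, 'Model-B': 70.66..., 'Model-C': 68.33...}
print(medals_for_task(demo, "AEA"))  # {'Model-B': '🥇', 'Model-A': '🥈', 'Model-C': '🥉'}
```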