---
language: en
license: mit
library_name: transformers
tags:
- economics
- finance
- bert
- language-model
- financial-nlp
- economic-analysis
datasets:
- custom_economic_corpus
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: fill-mask
---
|
|
|
# EconBERT

## Model Description

EconBERT is a BERT-based language model specifically fine-tuned for economic and financial text analysis. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.

> **Note**: Complete details of the model architecture, training methodology, evaluation, and performance metrics are available in our paper; see the citation section below.
|
|
|
## Intended Uses & Limitations

### Intended Uses

- **Economic Text Classification**: Categorizing economic documents, papers, or news articles (see the sketch after this list)
- **Sentiment Analysis**: Analyzing market sentiment in financial news and reports
- **Information Extraction**: Extracting structured data from unstructured economic texts
- **Masked-Token Prediction**: Filling in masked domain terminology, the model's native `fill-mask` task
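
A minimal sketch of the classification use case. EconBERT is published as a base (fill-mask) model, so the classification head below is randomly initialized until fine-tuned; the label set and the `YourUsername/EconBERT` repository id are illustrative placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical three-way topic labels; illustrative only, not from the paper
labels = ["monetary_policy", "fiscal_policy", "trade"]

tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "YourUsername/EconBERT", num_labels=len(labels)
)

# Classify a single economic headline
inputs = tokenizer("The central bank signalled further tightening.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])  # meaningful only after fine-tuning
```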
|
|
|
### Limitations

- The model is specialized for economic and financial domains and may not perform as well on general-purpose text
- Performance may vary on highly technical economic sub-domains that are under-represented in the training data
- For a detailed discussion of limitations, please refer to our paper
|
|
|
## Training Data

EconBERT was trained on a large corpus of economic and financial texts. For comprehensive information about the training data, including sources, size, and preprocessing steps, please refer to our paper.
|
|
|
## Evaluation Results

We evaluated EconBERT on several economic NLP tasks and compared its performance against general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.

Key findings include:

- Improved performance on economic-domain tasks compared to general-purpose BERT models
|
|
|
## How to Use

```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the base encoder
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModel.from_pretrained("YourUsername/EconBERT")

# Encode an example sentence; outputs.last_hidden_state holds the token embeddings
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
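
Because the card's `pipeline_tag` is `fill-mask`, the model can also be queried through the `fill-mask` pipeline. A minimal sketch, reusing the placeholder repository id from above:

```python
from transformers import pipeline

# Masked-token prediction with the pretrained MLM head
fill = pipeline("fill-mask", model="YourUsername/EconBERT")

# BERT-style tokenizers use [MASK] as the mask token
for candidate in fill("The Federal Reserve increased interest [MASK] by 25 basis points."):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```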
|
|
|
For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.
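
As a rough starting point for such fine-tuning, here is a hedged sketch using the Hugging Face `Trainer`; the `financial_phrasebank` dataset and the hyperparameters are illustrative choices, not the setup from the paper:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Illustrative only: financial_phrasebank is a public sentiment corpus,
# not the training corpus described in the paper
dataset = load_dataset("financial_phrasebank", "sentences_allagree")["train"]

tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "YourUsername/EconBERT", num_labels=3  # negative / neutral / positive
)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="econbert-sentiment",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```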
|
|
|
## Citation

If you use EconBERT in your research, please cite our paper:

```bibtex
@article{zhang2025econbert,
  title={EconBERT: A Large Language Model for Economics},
  author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
  journal={SSRN Working Paper},
  year={2025},
  publisher={University of Zurich}
}
```
|
|
|
## Additional Information

- **Model Type**: BERT
- **Language(s)**: English
- **License**: MIT

For more detailed information about the model architecture, training methodology, evaluation results, and applications, please refer to our paper.