---
language: en
license: mit
library_name: transformers
tags:
- economics
- finance
- bert
- language-model
- financial-nlp
- economic-analysis
datasets:
- custom_economic_corpus
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: fill-mask
---
# EconBERT
## Model Description
EconBERT is a BERT-based language model specifically fine-tuned for economic and financial text analysis. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.
> **Note**: The complete details of model architecture, training methodology, evaluation, and performance metrics are available in our paper. Please refer to the citation section below.
## Intended Uses & Limitations
### Intended Uses
- **Economic Text Classification**: Categorizing economic documents, papers, or news articles
- **Sentiment Analysis**: Analyzing market sentiment in financial news and reports
- **Information Extraction**: Extracting structured data from unstructured economic texts
- Other downstream NLP tasks in the economic and financial domain (see the fine-tuning sketch below)
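For example, a minimal sketch of adapting the checkpoint to one of these tasks, here sentiment classification with a hypothetical three-label scheme, could look like the following. The repository id is the card's placeholder, and the label setup is an illustrative assumption, not the configuration from our paper:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical setup: the repository id is a placeholder and the
# three-label sentiment scheme is an illustrative assumption.
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "YourUsername/EconBERT",
    num_labels=3,  # e.g. negative / neutral / positive market sentiment
)

# Tokenize a small batch and run a forward pass; the classification head
# is freshly initialized and still requires task-specific fine-tuning.
inputs = tokenizer(
    ["Earnings beat expectations.", "The outlook remains uncertain."],
    padding=True,
    return_tensors="pt",
)
logits = model(**inputs).logits
```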
### Limitations
- The model is specialized for economic and financial domains and may not perform as well on general text
- Performance may vary on highly technical economic sub-domains not well-represented in the training data
- For detailed discussion of limitations, please refer to our paper
## Training Data
EconBERT was trained on a large corpus of economic and financial texts. For comprehensive information about the training data, including sources, size, preprocessing steps, and other details, please refer to our paper.
## Evaluation Results
We evaluated EconBERT on several economic NLP tasks and compared its performance with general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.
Key findings include:
- Improved performance on economic-domain tasks compared with general-purpose BERT models
## How to Use
```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the base (encoder-only) model
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModel.from_pretrained("YourUsername/EconBERT")

# Encode an example sentence and run a forward pass
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state holds the contextual token embeddings
```
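Since the card's pipeline tag is `fill-mask`, the checkpoint can also be queried directly for masked-token prediction. A minimal sketch, again using the placeholder repository id:

```python
from transformers import pipeline

# Masked-language-modeling inference; the repository id is the card's placeholder.
fill_mask = pipeline("fill-mask", model="YourUsername/EconBERT")

# Print the top candidate tokens for the masked position with their scores
for prediction in fill_mask("The central bank raised the [MASK] rate to curb inflation."):
    print(prediction["token_str"], round(prediction["score"], 4))
```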
For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.
## Citation
If you use EconBERT in your research, please cite our paper:
```bibtex
@article{zhang2025econbert,
  title={EconBERT: A Large Language Model for Economics},
  author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
  journal={SSRN Working Paper},
  year={2025},
  publisher={University of Zurich}
}
```
## Additional Information
- **Model Type**: BERT
- **Language(s)**: English
- **License**: MIT
For more detailed information about model architecture, training methodology, evaluation results, and applications, please refer to our paper. |