---
library_name: transformers
language:
- es
base_model:
- FacebookAI/xlm-roberta-large
license: other
license_name: mel-nc
license_link: https://huggingface.co/IIC/MEL/blob/main/LICENSE
---
# MEL: Legal Spanish Language Model


<div style="display: flex; gap: 10px; flex-wrap: wrap;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/Rt_1__dD3k2IVP9jhbv39.png" width="200"/>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/HhLdRBQ3aCOQxRxwpYL-q.png" width="200"/>
  <!-- <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/x2zVpxHY5mbjgclkXiyej.png" width="200"/> -->
</div>

<div style="display: flex; gap: 10px; flex-wrap: wrap;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/ksmpjFjw1klWr-uyOcxHn.png" width="200"/>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/K7t57xHPFpoQec20XxPIY.png" width="200"/>
</div>

<div style="display: flex; gap: 10px; flex-wrap: wrap;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/kYKA18J_9sJFwtCElsnif.png" width="200"/>
</div>


**Model Name:** MEL (Modelo de Español Legal)  
**Model Type:** Encoder-only Transformer  
**Language:** Spanish  
**Domain:** Legal Texts  
**Paper:** [Link to paper](https://arxiv.org/abs/2501.16011)

---

## Overview
MEL is a transformer-based language model designed specifically for processing and understanding Spanish legal texts. Built upon **XLM-RoBERTa-large**, it is further pre-trained on a **large corpus of legal documents**, including the **Boletín Oficial del Estado (BOE), parliamentary transcripts, court rulings, and other legislative texts**. MEL significantly improves performance on legal NLP tasks such as **legal text classification** and **named entity recognition (NER)**.

---

## Model Description

### Architecture
- **Base Model:** XLM-RoBERTa-large
- **Training Objective:** Masked Language Modeling (MLM)
- **Pre-training Strategy:** Continued pre-training on Spanish legal texts
- **Context Window:** 512 tokens

### Training Data
MEL is trained on a **curated corpus** of **5.52 million legal texts (~92.7GB)** sourced from:
- **BOE (Boletín Oficial del Estado)**
- **Parliamentary records**
- **Court rulings**
- **Legal statutes**

To ensure high-quality input text, documents were preprocessed by **removing unwanted characters, normalizing whitespace, chunking long documents, and filtering out non-Spanish content**.
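
The exact preprocessing code is not published with the card; the sketch below shows one plausible Python implementation of these four steps, where the regex patterns, the 400-word chunk size, and the use of `langdetect` for language filtering are all assumptions rather than the authors' actual pipeline.

```python
import re

from langdetect import detect  # pip install langdetect; stand-in language filter


def preprocess(doc: str, max_words: int = 400) -> list[str]:
    """Illustrative cleaning pipeline: strip control characters, normalize
    whitespace, keep only Spanish text, and chunk long documents."""
    # Remove control characters (the exact character set removed for MEL
    # is an assumption).
    doc = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", " ", doc)
    # Collapse runs of whitespace into single spaces.
    doc = re.sub(r"\s+", " ", doc).strip()
    # Drop empty documents and documents not detected as Spanish.
    if not doc or detect(doc) != "es":
        return []
    # Split into fixed-size word chunks so each piece fits the 512-token window.
    words = doc.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]
```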

**Cutoff date:** February 2024

### Training Configuration
- **GPU:** NVIDIA A100 80GB PCIe
- **Training Time:** 13.9 days (~7 days per epoch, 2 epochs total)
- **Optimizer:** AdamW (β1=0.9, β2=0.98, ϵ=1e-6)
- **Batch Size:** 16 (Gradient Accumulation: 4, Effective Batch Size: 64)
- **Scheduler:** Cosine Learning Rate Scheduler
- **Warmup Steps:** 8% of total training steps
- **Learning Rate:** 1e-4
- **Weight Decay:** 0.01
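
Training code is not shipped with the card. Assuming standard `transformers` tooling was used, the settings above map to `TrainingArguments` roughly as in the sketch below; `tokenized_corpus` is a placeholder for a pre-tokenized dataset, and the 15% masking probability is the library default rather than a figure reported for MEL.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("FacebookAI/xlm-roberta-large")

args = TrainingArguments(
    output_dir="mel-pretraining",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,  # effective batch size 64
    learning_rate=1e-4,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.08,  # warmup over 8% of total steps
)

# Standard MLM collator; 15% masking is the transformers default.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=tokenized_corpus)  # tokenized_corpus: placeholder
trainer.train()
```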

<img src="https://cdn-uploads.huggingface.co/production/uploads/65c37bd23c88e35892c9c3a5/au1sYSZBrYQAJGUiFg5V7.png" alt="drawing" width="400"/>


---

## Evaluation
MEL was benchmarked on two datasets:

### **1. MultiEURLEX (Spanish Legal Text Classification)**
- **Link:** https://huggingface.co/datasets/coastalcph/multi_eurlex
- **Task:** Multilabel classification of EU laws
- **Performance:**
  - **MEL achieves an F1 score of 0.8025**, outperforming **XLM-RoBERTa-Large (0.7962)**, **Legal-XLM-RoBERTa (0.7933)**, and **RoBERTalex (0.7890)**.
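
The fine-tuning code for this benchmark is not included in the card; a minimal sketch of how such a multilabel head can be configured with `transformers` follows, assuming the 21-label level-1 EUROVOC setting of MultiEURLEX (the authors' exact label level is not stated here).

```python
from transformers import AutoModelForSequenceClassification

# problem_type switches the loss to BCEWithLogitsLoss, as required for
# multilabel classification; 21 = level-1 EUROVOC concepts (assumption).
model = AutoModelForSequenceClassification.from_pretrained(
    "IIC/MEL",
    num_labels=21,
    problem_type="multi_label_classification",
)
```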

### **2. Private Multiclass Classification Dataset**
- **Task:** Classify legal documents into one of 9 categories
- **Performance:**
  - **MEL achieves an F1 score of 0.9260**, surpassing **XLM-RoBERTa-Large (0.9103)**, **Legal-XLM-RoBERTa (0.8935)**, and **RoBERTalex (0.7007)**.
- **Small Data Learning:** MEL generalizes better with limited training data, achieving an **F1 score of 0.8812** early in training, compared to **0.7803** for the next-best model.

---

## Model Performance
### **Key Findings**
- **Outperforms general multilingual models (XLM-RoBERTa) and other domain-specific models in Spanish legal text classification.**
- **Requires less fine-tuning, demonstrating strong domain adaptation from the pre-training phase.**
- **Shows high sample efficiency, achieving strong results even with limited training data.**

### **Limitations**
- **Not evaluated on NER or other token-level tasks due to the lack of annotated Spanish legal datasets.**
- **Trained only on Spanish legal texts, so performance in multilingual legal contexts is unknown.**
- **Potential bias in legal terminology due to corpus selection.**

---

## How to Use
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/MEL")
model = AutoModel.from_pretrained("IIC/MEL")

text = "El artículo 45 de la Constitución establece que..."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state: contextual token embeddings
```
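
Since MEL was trained with an MLM objective, the checkpoint can also be sanity-checked with a fill-mask pipeline, assuming the masked-LM head is included in the published weights (`<mask>` is XLM-RoBERTa's mask token):

```python
from transformers import pipeline

# Predict the most likely fillers for the masked token.
fill = pipeline("fill-mask", model="IIC/MEL")
print(fill("El artículo 45 de la <mask> establece que..."))
```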

For fine-tuning on specific legal tasks, use `Trainer` from Hugging Face’s `transformers` library, as in the sketch below.
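
As an illustration only (the hyperparameters and the `train_ds`/`eval_ds` placeholders below are assumptions, not the authors' setup), a single-label classifier in the spirit of the 9-class benchmark above could be set up like this:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("IIC/MEL")
model = AutoModelForSequenceClassification.from_pretrained("IIC/MEL", num_labels=9)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mel-finetuned",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=train_ds,  # placeholder: tokenized training split
    eval_dataset=eval_ds,    # placeholder: tokenized validation split
)
trainer.train()
```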

---

## Future Work
- Develop **NER models** for **legal entity extraction**.
- Expand dataset to cover **more diverse legal domains** (e.g., contracts, case law, administrative procedures).
- Fine-tune on additional **downstream tasks** (question answering, legal summarization, information retrieval).
- Improve **bias detection and mitigation strategies**.

---

## Citation
If you use MEL, please cite:
```
@misc{sánchez2025mellegalspanishlanguage,
      title={MEL: Legal Spanish Language Model}, 
      author={David Betancur Sánchez and Nuria Aldama García and Álvaro Barbero Jiménez and Marta Guerrero Nieto and Patricia Marsà Morales and Nicolás Serrano Salas and Carlos García Hernán and Pablo Haya Coll and Elena Montiel Ponsoda and Pablo Calleja Ibáñez},
      year={2025},
      eprint={2501.16011},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.16011}, 
}
```

---

## Acknowledgements
This work has received funding from the [Inesdata-project](https://inesdata-project.eu/content/en/index.html) (Infrastructure to Investigate Data Spaces in Distributed Environments at UPM), a project funded under the UNICO I+D CLOUD call by the Ministry for Digital Transformation and the Civil Service, within the framework of the recovery plan PRTR financed by the European Union (NextGenerationEU).

**Project code**: TSI-063100-2022-0001



**Contributors:**
- **David Betancur Sánchez**, Instituto de Ingeniería del Conocimiento (IIC)
- **Nuria Aldama García**, Instituto de Ingeniería del Conocimiento (IIC)
- **Álvaro Barbero Jiménez**, Instituto de Ingeniería del Conocimiento (IIC)
- **Marta Guerrero Nieto**, Instituto de Ingeniería del Conocimiento (IIC)
- **Patricia Marsà Morales**, Instituto de Ingeniería del Conocimiento (IIC)
- **Nicolás Serrano Salas**, Instituto de Ingeniería del Conocimiento (IIC)
- **Carlos García Hernán**, Instituto de Ingeniería del Conocimiento (IIC)
- **Pablo Haya Coll**, Instituto de Ingeniería del Conocimiento (IIC)
- **Elena Montiel Ponsoda**, Universidad Politécnica de Madrid
- **Pablo Calleja Ibáñez**, Universidad Politécnica de Madrid

---