---
license: mit
task_categories:
- fill-mask
tags:
- pretraining
- encoder
- multilingual
---

# mmBERT Decay Phase Data

[License: MIT](https://opensource.org/licenses/MIT)
[Paper](https://arxiv.org/abs/2509.06888)
[Model Collection](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
[GitHub Repository](https://github.com/jhu-clsp/mmBERT)

> **Phase 3 of 3**: Annealed language learning decay phase (100B tokens) with massive multilingual expansion to 1833 languages.

## Data Composition

Note: there are multiple decay data mixtures. The mixture described below is the Decay-Cont mixture, but the data in this repository is the Decay-Eng mixture. If you are interested in the others, please let us know so we can prioritize releasing them.

| Data Source | Tokens (B) | Percentage | Description |
|:------------|:-----------|:-----------|:------------|
| FineWeb2 | 78.5 | 76.0% | High-quality multilingual web crawl data |
| Wikipedia (MegaWika) | 9.5 | 9.2% | Encyclopedia articles (1833 languages) |
| arXiv | 3.3 | 3.2% | Academic preprints |
| Textbooks (ProLong) | 3.1 | 3.0% | Educational content |
| Code (ProLong) | 2.8 | 2.7% | Code repositories and files |
| Books | 2.2 | 2.1% | Literature and reference books |
| DCLM (Dolmino) | 2.0 | 2.0% | High-quality English web data |
| Tulu Flan | 1.0 | 1.0% | Instruction-following data |
| StarCoder | 0.5 | 0.5% | Code repositories |
| Dolmino Math | 0.5 | 0.5% | Mathematical content |
| **Total** | **103.3** | **100.0%** | Optimized for rapid language acquisition |

## Massive Language Coverage

This phase dramatically expands language coverage to **1833 languages**, implementing the novel **Cascading Annealed Language Learning (ALL)** approach:

- **Temperature Schedule**: τ=0.3 (most uniform sampling)
- **Low-resource Focus**: Includes 1723 new languages with minimal data
- **Rapid Learning**: Demonstrates a 68% performance improvement on Tigrinya and a 26% improvement on Faroese
- **Script Diversity**: Covers virtually all writing systems in FineWeb2

### Key Innovation: Annealed Language Learning

Rather than training on all languages simultaneously, mmBERT uses a cascading approach:
1. **Phase 1**: 60 high-resource languages (τ=0.7)
2. **Phase 2**: 110 languages, adding mid-resource languages (τ=0.5)
3. **Phase 3**: 1833 languages, with a focus on low-resource languages (τ=0.3)

This enables rapid learning of new languages while maintaining performance on high-resource ones.
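
To make the temperature schedule concrete, here is a minimal sketch of temperature-based language sampling as it is commonly implemented for multilingual pretraining; the exact procedure mmBERT uses is specified in the paper, and the token counts below are made-up placeholders. Lowering τ flattens the sampling distribution toward uniform, which up-weights low-resource languages.

```python
import numpy as np

def temperature_sampling_probs(token_counts, tau):
    """Flatten a language distribution with temperature tau.

    tau = 1.0 keeps the natural data proportions; tau -> 0 approaches
    uniform sampling, which up-weights low-resource languages.
    """
    counts = np.asarray(token_counts, dtype=np.float64)
    probs = counts / counts.sum()   # natural proportions
    scaled = probs ** tau           # apply temperature
    return scaled / scaled.sum()    # renormalize to a distribution

# Hypothetical per-language token counts (in billions), for illustration only.
counts = {"english": 1000.0, "swahili": 10.0, "faroese": 0.1}

for tau in (0.7, 0.5, 0.3):  # the temperatures used across the three phases
    p = temperature_sampling_probs(list(counts.values()), tau)
    print(f"tau={tau}:", {lang: round(prob, 3) for lang, prob in zip(counts, p)})
```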

## Key Features

- **Ultra-low Masking**: 5% mask rate for optimal learning efficiency (see the sketch after this list)
- **Model Merging**: Three decay variants (English-focused, 110-language, and 1833-language) merged using TIES; this repository corresponds to the English-focused variant
- **Quality Focus**: Emphasizes the highest-quality data sources
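
As a rough illustration of what a 5% mask rate means in practice, here is a minimal masked-language-modeling collation sketch using Hugging Face `transformers`. The tokenizer checkpoint id is an assumption, and the actual decay-phase masking is implemented in the ModernBERT training code referenced below, not by this snippet.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed checkpoint id for the mmBERT tokenizer; substitute your own if it differs.
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")

# Select ~5% of non-special tokens for the MLM objective (the decay-phase rate).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.05,
)

examples = [tokenizer(text) for text in ["Hello world!", "Halló, heimur!"]]
batch = collator(examples)

# Positions with a label other than -100 are the ones the model must predict.
masked_fraction = (batch["labels"] != -100).float().mean().item()
print(f"masked fraction ~= {masked_fraction:.3f}")
```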

## Usage

For decay phase training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access

```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-decay',
    local='/tmp/mmbert-decay-data',
    shuffle=True
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
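
The `streaming` import comes from MosaicML's [Streaming](https://github.com/mosaicml/streaming) library, installable with `pip install mosaicml-streaming`.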

## Performance Impact

The decay phase demonstrates remarkable efficiency in low-resource language learning:
- **Tigrinya (TiQuAD)**: 68% improvement (12.1 F1 points) from including the language
- **Faroese (FoQA)**: 26% improvement (15.4 F1 points)
- **SOTA Performance**: Can even outperform GPT-4o and Gemini 2.5 Pro
- **Rapid Acquisition**: Significant gains with only 100B tokens of exposure

## Related Resources

- **Models**: [mmBERT Model Suite](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
- **Phase 1**: [Pre-training Data](https://huggingface.co/datasets/jhu-clsp/mmbert-pretrain-p1-fineweb2-langs) (2.3T tokens)
- **Phase 2**: [Mid-training Data](https://huggingface.co/datasets/jhu-clsp/mmbert-midtraining) (600B tokens)
- **Checkpoints**: [Training Checkpoints](https://huggingface.co/datasets/jhu-clsp/mmbert-checkpoints)
- **Paper**: [arXiv](https://arxiv.org/abs/2509.06888)
- **Code**: [GitHub Repository](https://github.com/jhu-clsp/mmBERT)

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```