---
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
---
# MiniPLM-Mamba-130M
[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
**MiniPLM-Mamba-130M** is a 130M-parameter language model with the [Mamba architecture](https://github.com/state-spaces/mamba), pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) with the MiniPLM knowledge distillation framework, using the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families. The [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM is open-sourced for reproducibility.
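Below is a minimal usage sketch with 🤗 Transformers. The repository id `MiniLLM/MiniPLM-Mamba-130M`, the prompt, and the generation settings are assumptions for illustration, not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; adjust if the checkpoint is hosted under a different name.
model_id = "MiniLLM/MiniPLM-Mamba-130M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

prompt = "The Pile is a large, diverse corpus for"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```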
## MiniPLM: Knowledge Distillation for Pre-Training Language Models
Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) with large teacher LMs. While effective in fine-tuning, KD during pre-training faces challenges in efficiency, flexibility, and effectiveness: existing methods either incur high computational costs due to online teacher inference, require tokenization matching between teacher and student LMs, or risk losing the difficulty and diversity of the teacher-generated training data. To address these issues, MiniPLM is proposed: a KD framework that performs KD for pre-training LMs by refining the training data distribution with the teacher's knowledge. For efficiency, MiniPLM performs offline teacher LM inference, allowing KD for multiple student LMs without adding training-time costs. For flexibility, MiniPLM operates solely on the training corpus, enabling KD across model families. For effectiveness, MiniPLM leverages the differences between large and small LMs to enhance the difficulty and diversity of the training data, helping student LMs acquire versatile and sophisticated knowledge.
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/2BqT0NgkmIXYlktovw9kG.png" width="1000">
</p>
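The sketch below illustrates the intuition behind Difference Sampling: score each document by how much better the large teacher LM models it than a small reference LM, and keep the top fraction of documents. This is an illustrative reimplementation under that assumption; the function names, the use of average per-token log-likelihood as the score, and the top-`r` selection (e.g. `r = 0.5`, matching the released `r0.5` corpus) are assumptions, not the released code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_likelihood(model, tokenizer, text):
    """Average per-token log-likelihood of `text` under a causal LM."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood per token

def difference_sampling(corpus, teacher, reference, tokenizer, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of documents that the large teacher LM
    models comparatively better than the small reference LM.
    Assumes teacher and reference share a tokenizer (e.g. both Qwen1.5)."""
    scores = [
        avg_log_likelihood(teacher, tokenizer, doc)
        - avg_log_likelihood(reference, tokenizer, doc)
        for doc in corpus
    ]
    k = int(len(corpus) * keep_ratio)
    keep = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:k]
    return [corpus[i] for i in keep]

# Hypothetical usage: refine a list of documents with a Qwen1.5-1.8B teacher
# and a small Qwen reference model, then pre-train the student on the result.
# teacher = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
# reference = AutoModelForCausalLM.from_pretrained("<small-qwen-reference>")
# tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
# refined = difference_sampling(docs, teacher, reference, tokenizer, keep_ratio=0.5)
```

Because the scoring only touches the training corpus, the refined data can be reused to distill students from any model family (here, Mamba), without matching tokenizers between teacher and student.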
## Evaluation
MiniPLM models achieve better performance given the same computation and scale well across model sizes:
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/EOYzajQcwQFT5PobqL3j0.png" width="1000">
</p>
## Baseline Models
+ [Conventional Pre-Training](https://huggingface.co/MiniLLM/Pretrain-Mamba-130M)
## Citation
```bibtex
@article{miniplm,
title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
journal={arXiv preprint arXiv:2410.17215},
year={2024}
}
```