---
base_model:
- internlm/internlm2-chat-1_8b
language:
- multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- internvl
- vision
- ocr
- custom_code
- moe
base_model_relation: merge
---
# Mono-InternVL-2B-S1-1
This repository contains the Mono-InternVL-2B model after **S1.1 concept learning**, as part of the work presented in [Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models](https://huggingface.co/papers/2507.12566).
Please refer to our [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for full introduction, code, and usage instructions.
**Mono-InternVL** is a family of monolithic multimodal large language models (MLLMs) that integrates visual encoding and language decoding into a single LLM, aiming for cheaper and faster inference. It addresses challenges of unstable optimization and catastrophic forgetting by embedding a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge via delta tuning.
### ✨ Key Highlights
- **Monolithic Architecture**: Integrates visual encoding and language decoding into a single LLM, simplifying the model structure.
- **Endogenous Visual Pre-training (EViP++)**: Features an innovative pre-training strategy that maximizes visual capabilities through progressive learning and incorporates additional visual attention experts.
- **Efficiency**: Significantly reduces training and inference costs, including a fused CUDA kernel for faster MoE operations, while maintaining competitive performance (see the sketch below).
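To make the highlights above more concrete, here is a minimal, illustrative sketch of the modality-routed expert idea (not the actual Mono-InternVL implementation): the pre-trained text MLP expert stays frozen while a parallel visual expert is trained on visual tokens only, mirroring the delta-tuning setup described above. The class and argument names (`ModalityMoE`, `modality_mask`) are invented for illustration.

```python
import torch
import torch.nn as nn

class ModalityMoE(nn.Module):
    """Illustrative modality-routed MLP: frozen text expert + trainable visual expert."""

    def __init__(self, hidden_size: int = 2048, intermediate_size: int = 8192):
        super().__init__()
        self.text_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size), nn.GELU(),
            nn.Linear(intermediate_size, hidden_size),
        )
        # Visual expert with the same shape; only this branch receives gradients
        # (delta tuning: the pre-trained text expert stays frozen).
        self.visual_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size), nn.GELU(),
            nn.Linear(intermediate_size, hidden_size),
        )
        for p in self.text_expert.parameters():
            p.requires_grad = False

    def forward(self, hidden_states: torch.Tensor, modality_mask: torch.Tensor) -> torch.Tensor:
        # modality_mask: bool tensor of shape (batch, seq_len), True for visual tokens.
        text_out = self.text_expert(hidden_states)
        visual_out = self.visual_expert(hidden_states)
        return torch.where(modality_mask.unsqueeze(-1), visual_out, text_out)

# Toy usage: 6 tokens, of which the first 4 are visual tokens.
moe = ModalityMoE(hidden_size=8, intermediate_size=32)
x = torch.randn(1, 6, 8)
mask = torch.tensor([[True, True, True, True, False, False]])
print(moe(x, mask).shape)  # torch.Size([1, 6, 8])
```

In the real model each token is dispatched to a single expert rather than computing both branches, which is what the fused CUDA kernel mentioned above accelerates.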
### 📊 Performance
Mono-InternVL achieves competitive performance across various multimodal benchmarks, often outperforming other monolithic MLLMs. Compared to its modular counterpart, InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%.
Below is a summary of some key benchmarks:
| Benchmark | Mono-InternVL-2B | Mini-InternVL-2B-1-5 | Emu3 |
| :------------------- | :--------------: | :------------------: | :---: |
| Type | Monolithic | Modular | Monolithic |
| #Activated Params | 1.8B | 2.2B | 8B |
| **MMVet** | 40.1 | 39.3 | 37.2 |
| **OCRBench** | 767 | 654 | 687 |
| **MathVista** | 45.7 | 41.1 | – |
| **TextVQA** | 72.6 | 70.5 | 64.7 |
| **DocVQA** | 80.0 | 85.0 | 76.3 |
*(For full performance details, please refer to the [paper](https://huggingface.co/papers/2507.12566) and [project page](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/))*
### 🚀 Quick Inference (using Transformers)
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
# Load model and tokenizer (ensure transformers==4.37.2)
path = 'OpenGVLab/Mono-InternVL-2B'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# Load and preprocess the image. The `load_image` utility is provided in the
# GitHub repo (a simplified sketch is given after this code block).
# pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = None  # Replace with the actual preprocessed image tensor
generation_config = dict(max_new_tokens=1024, do_sample=True)
# Example: single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
# response = model.chat(tokenizer, pixel_values, question, generation_config)
# print(f'User: {question}\nAssistant: {response}')
# Example: pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
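The `load_image` utility referenced in the comments is defined in the [GitHub repository](https://github.com/OpenGVLab/mono-internvl). As a rough sketch only, assuming the standard InternVL-style preprocessing (448×448 input resolution, ImageNet normalization) and omitting the dynamic tiling the real utility performs, it looks roughly like this:

```python
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def load_image(image_file, input_size=448, max_num=12):
    # Simplified: a single resized tile. The real utility splits large images
    # into up to `max_num` 448x448 tiles (plus a thumbnail) before stacking.
    image = Image.open(image_file).convert('RGB')
    transform = T.Compose([
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])
    return transform(image).unsqueeze(0)  # shape: (num_tiles, 3, input_size, input_size)
```

Its output can then be cast and moved as in the commented line above, e.g. `load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()`.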
## Citation
If you find this project useful in your research, please consider citing the related papers:
```BibTeX
@article{mono_internvl_v1,
title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
journal={arXiv preprint arXiv:2410.08202},
year={2024}
}
@article{mono_internvl_v1.5,
title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2507.12566},
year={2025}
}
```