---
license: mit
tags:
- vision
- vision-language-model
- contrastive learning
- self-supervised learning
pipeline_tag: image-text-to-text
library_name: transformers
---

**[CVPR 2025] COSMOS Model**

Authors: [Sanghwan Kim](https://kim-sanghwan.github.io/), [Rui Xiao](https://www.eml-munich.de/people/rui-xiao), [Mariana-Iuliana Georgescu](https://lilygeorgescu.github.io/), [Stephan Alaniz](https://www.eml-munich.de/people/stephan-alaniz), [Zeynep Akata](https://www.eml-munich.de/people/zeynep-akata) 

COSMOS is introduced in the paper [COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training](https://arxiv.org/abs/2412.01814). It is trained in a self-supervised learning framework that combines multi-modal augmentation with a cross-attention module. COSMOS outperforms CLIP-based models trained on larger datasets on visual perception and contextual understanding tasks, and also achieves strong performance on downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
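As a rough schematic of the training objective (a sketch only, not the authors' implementation; every name below is hypothetical), cross-modality self-distillation can be thought of as a DINO-style teacher-student loss applied to cross-modal embeddings, trained jointly with a CLIP-style contrastive loss:

```python
# Schematic sketch -- hypothetical names, not the COSMOS codebase.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Standard CLIP-style InfoNCE over a batch of paired embeddings.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

def self_distillation_loss(student_out, teacher_out,
                           t_student=0.1, t_teacher=0.04):
    # Student matches the (detached) EMA-teacher distribution.
    teacher_probs = F.softmax(teacher_out.detach() / t_teacher, dim=-1)
    student_logp = F.log_softmax(student_out / t_student, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()

# In training, the student encodes augmented views (cropped images and
# cropped text) through a cross-attention module, the EMA teacher encodes
# global views, and the total loss sums the two terms above.
```

The asymmetric temperatures (a sharper teacher than student) follow common self-distillation practice to help avoid representation collapse.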

**Usage**

Please refer to our [Github repo](https://github.com/ExplainableML/cosmos) for detailed usage.
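Since the card is tagged `library_name: transformers`, loading along the following lines may work; the exact Auto classes and checkpoint id are assumptions, and the repo above documents the supported path.

```python
# Hedged sketch: the hub id and Auto classes are assumptions based on the
# `transformers` tag above; consult the GitHub repo for the supported usage.
from transformers import AutoModel, AutoProcessor

model_id = "<org>/<cosmos-checkpoint>"  # placeholder, see the repo/hub page
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```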

**Citation**

If you find our work useful, please consider citing:

```bibtex
@article{kim2024cosmos,
  title={COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training},
  author={Kim, Sanghwan and Xiao, Rui and Georgescu, Mariana-Iuliana and Alaniz, Stephan and Akata, Zeynep},
  journal={arXiv preprint arXiv:2412.01814},
  year={2024}
}
```