Add pipeline tag and library name
This PR adds the `pipeline_tag` and `library_name` fields to the model card metadata for better discoverability and clarity. The `pipeline_tag` is set to `image-feature-extraction`, reflecting the model's function, and `library_name` is set to `transformers`, as the model is used via the Transformers library.
README.md CHANGED

---
license: mit
pipeline_tag: image-feature-extraction
library_name: transformers
---
<div align="center">
<img width="30%" src="figures/logo.png">
</div>

## Introduction

**MoonViT** is a native-resolution vision encoder, initialized from and continually pre-trained on **SigLIP-SO-400M**.
To facilitate standalone use, we have separated MoonViT's implementation and weights from [moonshotai/Kimi-VL-A3B-Instruct](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct).

If you are interested in the training process of MoonViT, please see the paper [Kimi-VL Technical Report](https://huggingface.co/papers/2504.07491).

## Example usage
```python
from PIL import Image
from transformers import AutoModel, AutoImageProcessor

model_path = "moonshotai/MoonViT-SO-400M"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoImageProcessor.from_pretrained(model_path, trust_remote_code=True)

image_path = "./figures/demo.png"
image = Image.open(image_path)

# The processor keeps the image at native resolution and returns the patch
# tensors (pixel_values) along with the per-image patch-grid shapes
# (image_grid_hws) that the encoder consumes.
images_processed = processor(image, return_tensors="pt").to(dtype=model.dtype, device=model.device)
# The model returns a list with one feature tensor per input image.
image_features: list = model(images_processed.pixel_values, images_processed.image_grid_hws)

print(f"dtype: {image_features[0].dtype}, shape: {image_features[0].shape}")
# dtype: torch.bfloat16, shape: torch.Size([1092, 4, 1152])
```
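Because the encoder returns one variable-length feature tensor per image rather than a fixed-size embedding, a common follow-up step is to pool the patch features into a single vector. The sketch below is a minimal, hypothetical example and not part of the official model card: it assumes the output layout shown above, where the leading dimensions index patch positions (and merged sub-patches) and the last dimension (1152) is the hidden size.

```python
import torch

# Hypothetical post-processing sketch (an assumption, not the model card's
# documented API): collapse the per-image feature tensor into one global
# descriptor by mean-pooling over all patch tokens.
feats = image_features[0]                       # e.g. torch.Size([1092, 4, 1152])
tokens = feats.flatten(0, 1)                    # -> (1092 * 4, 1152), one row per sub-patch
global_descriptor = tokens.float().mean(dim=0)  # -> (1152,)
global_descriptor = torch.nn.functional.normalize(global_descriptor, dim=0)
print(global_descriptor.shape)                  # torch.Size([1152])
```

Mean-pooling is only one choice; for retrieval-style use, an attention-pooling head or the pooling used during SigLIP pre-training may work better if the checkpoint exposes one.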