---
language:
- en
base_model:
- nvidia/C-RADIOv2-L
pipeline_tag: image-feature-extraction
---

## Description: <br>

PS3-4K-C-RADIOv2 is a vision encoder that extracts visual features from images of up to 4K resolution.

This model is for research and development only.

### License/Terms of Use: <br> 

This model is released under the attached custom [NSCLv1](https://huggingface.co/nvidia/PS3-4K-C-RADIOv2/blob/main/LICENSE.md) license, which permits use only for non-commercial research activities and non-commercial research publications.

### Deployment Geography:

Global

### Use Case: <br>

The model is used for extracting visual features from high-resolution images.

### Release Date:  <br>

Hugging Face [07/26/2025] via [https://huggingface.co/nvidia/PS3-4K-C-RADIOv2] <br> 

## Reference(s):

The model is from the paper [Scaling Vision Pre-Training to 4K Resolution](https://arxiv.org/abs/2503.19903). Useful links:

[![website](https://img.shields.io/badge/website-76b900?style=for-the-badge&logo=safari&labelColor=555555)](https://nvlabs.github.io/PS3/)
[![Arxiv](https://img.shields.io/badge/Arxiv-b31b1b?style=for-the-badge&logo=arxiv&labelColor=555555)](https://arxiv.org/abs/2503.19903)
[![VILA-HD Demo](https://img.shields.io/badge/-VILA--HD_Demo-brightgreen?style=for-the-badge&logo=huggingface&labelColor=555555&color=ff6e00)](https://huggingface.co/spaces/bfshi/VILA-HD-demo)
[![PS3 Models](https://img.shields.io/badge/PS3%20Models%20-ffd21e?style=for-the-badge&logo=huggingface&labelColor=555555)](https://huggingface.co/collections/nvidia/ps3-scaling-vision-pre-training-to-4k-resolution-682d0535b61c07afd45242e9)
[![VILA-HD Models](https://img.shields.io/badge/VILA--HD%20Models%20-ffd21e?style=for-the-badge&logo=huggingface&labelColor=555555)](https://huggingface.co/collections/nvidia/ps3-scaling-vision-pre-training-to-4k-resolution-682d0535b61c07afd45242e9)
[![PS3 Code](https://img.shields.io/badge/PS3%20Code%20-181717?style=for-the-badge&logo=github&labelColor=555555)](https://github.com/NVlabs/PS3)


## Model Architecture:
**Architecture Type:** Neural Network

**Network Architecture:** Vision Transformer designed for high-resolution images 

This model was developed based on [C-RADIOv2](https://huggingface.co/nvidia/C-RADIOv2-L). Please see training designs in the paper.
 

## Input: <br>
**Input Type(s):** Image <br>
**Input Format:** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two-Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 3840*3840. <br>

## Output: <br>
**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br> 

## Software Integration:
**Runtime Engine(s):** 
Not Applicable (N/A) <br> 

**Supported Hardware Microarchitecture Compatibility:** <br>
NVIDIA Ampere <br>
NVIDIA Blackwell <br>
NVIDIA Jetson  <br>
NVIDIA Hopper <br>

**Preferred/Supported Operating System(s):**
Linux <br>
Linux 4 Tegra <br>
QNX  <br>
Windows <br>

## Model Version(s): 

v1.0 - Initial release

## Pre-Trained Models

### PS3 models

| Vision Model    | Max Resolution | Pre-Trained Weights                                                     |
|-----------------|----------------|-------------------------------------------------------------------------|
| PS3-1.5K-SigLIP | 1512 * 1512    | [nvidia/PS3-1.5K-SigLIP](https://huggingface.co/nvidia/PS3-1.5K-SigLIP) |
| PS3-4K-SigLIP   | 3780 * 3780    | [nvidia/PS3-4K-SigLIP](https://huggingface.co/nvidia/PS3-4K-SigLIP)     |
| PS3-1.5K-C-RADIOv2 | 1536 * 1536    | [nvidia/PS3-1.5K-C-RADIOv2](https://huggingface.co/nvidia/PS3-1.5K-C-RADIOv2) |
| PS3-4K-C-RADIOv2   | 3840 * 3840    | [nvidia/PS3-4K-C-RADIOv2](https://huggingface.co/nvidia/PS3-4K-C-RADIOv2)     |
| PS3-1.5K-SigLIP2 | 1512 * 1512    | [nvidia/PS3-1.5K-SigLIP2](https://huggingface.co/nvidia/PS3-1.5K-SigLIP2) |
| PS3-4K-SigLIP2   | 3780 * 3780    | [nvidia/PS3-4K-SigLIP2](https://huggingface.co/nvidia/PS3-4K-SigLIP2)     |
| PS3_Lang-1.5K-SigLIP2 | 1512 * 1512    | [nvidia/PS3_Lang-1.5K-SigLIP2](https://huggingface.co/nvidia/PS3_Lang-1.5K-SigLIP2) |
| PS3_Lang-4K-SigLIP2   | 3780 * 3780    | [nvidia/PS3_Lang-4K-SigLIP2](https://huggingface.co/nvidia/PS3_Lang-4K-SigLIP2)     |

### Performance

![PS3-1.5K-SigLIP2 Performance](assets/ps3_siglip2_performance.png)

## Training Datasets: <br>   

75M images <br>

One dataset built from:
- SA-1B (https://ai.meta.com/datasets/segment-anything/)
- IDL (https://huggingface.co/datasets/pixparse/idl-wds)
   
Training: 100% <br> 

## Training Dataset:

**Link:**
We used the following datasets when developing PS3:
- SA-1B (https://ai.meta.com/datasets/segment-anything/)
- IDL (https://huggingface.co/datasets/pixparse/idl-wds)

**Data Collection Method by dataset:**  <br>
Automated

**Labeling Method by dataset:**  <br>
Automated

**Properties (Quantity, Dataset Descriptions, Sensor(s)):**  <br>
75M images with resolutions up to 4K x 4K.

## Testing & Evaluation Datasets: 
* None <br>


## Performance

### Performance of PS3 models 

See Table 1 in the paper for full results.

## Inference:
**Acceleration Engine:** Not Applicable (N/A) <br>
**Test Hardware:** <br>
The model is tested on NVIDIA A100 GPU.

## Installation

Install through pip to use PS3 out of the box.
```bash
pip install ps3-torch
```

If you would like to make changes to the PS3 code, clone the [PS3 repository](https://github.com/NVlabs/PS3) and install it in editable mode.
```bash
git clone https://github.com/NVlabs/PS3.git
cd PS3
pip install -e .
```

## Inference - Quick Start

Here we show example usage including
- loading the model
- selectively encoding high-res image based on image saliency (bottom-up selection) and visualizing the selection probabilities
- selectively encoding high-res image based on text prompts (top-down selection) and visualizing the selection probabilities
- formatting the encoded features into (masked) feature maps

### 1. Load Model and Image
```python
from PIL import Image
from ps3 import PS3VisionModel, PS3ImageProcessor

# Load the PS3 model and processor.
vision_model = PS3VisionModel.from_pretrained("nvidia/PS3-4K-SigLIP2")
processor = PS3ImageProcessor.from_pretrained("nvidia/PS3-4K-SigLIP2")
vision_model.cuda().eval()

# You can replace it with your own image.
image = Image.open("assets/test_images/dock.jpg")

# Preprocess the image.
x = processor(image)["pixel_values"][0].unsqueeze(0).cuda()
```

### 2. Encode High-Res Image with Bottom-Up Selection

PS3 can select important high-res patches based on visual saliency and encode those patches.

**You can encode the whole high-res image using PS3.**
```python
outs = vision_model(x, num_look_close="all")
features = outs.last_hidden_state
print(features.shape)  # (1, 88209, 1152)
```
Note that the PS3-4K model processes the image at multiple scales: 378 (low-res), 756, 1512, and 3780, with a patch size of 14.

The number of tokens at each scale is therefore (378/14)^2 = 729, (756/14)^2 = 2916, (1512/14)^2 = 11664, and (3780/14)^2 = 72900.

The output hidden state concatenates the tokens from all scales along the sequence dimension, giving 729 + 2916 + 11664 + 72900 = 88209 tokens in total.
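The token arithmetic above can be checked with a small sketch (the helper name `num_tokens` is invented here for illustration; the scales and patch size are those stated for the PS3-4K model):

```python
# Tokens per scale for a square image of side `s` with patch size 14
# is (s // 14) ** 2; the output concatenates all scales.
def num_tokens(scales=(378, 756, 1512, 3780), patch=14):
    per_scale = [(s // patch) ** 2 for s in scales]
    return per_scale, sum(per_scale)

per_scale, total = num_tokens()
print(per_scale)  # [729, 2916, 11664, 72900]
print(total)      # 88209
```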

**You can encode parts of the high-res image by setting `num_look_close`, i.e., how many times to run the high-res selection and encoding.**
```python
outs = vision_model(x, num_look_close=2)
features = outs.last_hidden_state
print(features.shape)  # (1, 5849, 1152)
```
In this example, the model runs the high-res selection and encoding twice.

Note that PS3 processes at most 2560 high-res patches at a time, so running the high-res selection and encoding twice gives 2560 * 2 = 5120 high-res tokens. There are also 729 low-res tokens, for a total of 729 + 5120 = 5849 tokens.

**You can also decide how many high-res tokens to process by setting `num_token_look_close`.**
```python
outs = vision_model(x, num_token_look_close=3000)
features = outs.last_hidden_state
print(features.shape)  # (1, 3729, 1152)
```
In this example, it only processes 3000 high-res tokens. Since PS3 processes at most 2560 high-res patches at a time, it runs the high-res selection and encoding twice: the first pass processes 2560 high-res tokens and the second processes the remaining 440. In the end it outputs 3729 tokens (3000 high-res + 729 low-res).
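The budget arithmetic behind `num_token_look_close` can be sketched as follows (the helper name and constants are written out here for illustration; 2560 patches per pass and 729 low-res tokens are the figures stated above):

```python
import math

LOW_RES_TOKENS = 729      # (378 / 14) ** 2 low-res tokens, always present
PATCHES_PER_PASS = 2560   # max high-res patches PS3 encodes per pass

def total_tokens_for_budget(num_token_look_close):
    """Return (number of selection passes, total output tokens)."""
    passes = math.ceil(num_token_look_close / PATCHES_PER_PASS)
    return passes, LOW_RES_TOKENS + num_token_look_close

print(total_tokens_for_budget(3000))  # (2, 3729)
print(total_tokens_for_budget(5120))  # (2, 5849)
```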

**Visualize the bottom-up patch selection probabilities.**
```python
############## Helper functions for visualization ##############

import os

# Install cv2, matplotlib, and scipy for visualization purposes.
os.system("pip install opencv-python matplotlib scipy")

from torchvision import transforms
import numpy as np
import cv2
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

def create_heatmap_overlay(image, heatmap, alpha=0.4, colormap=plt.cm.jet, sigma=10.0):
    if len(image.shape) == 2:
        image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)

    smoothed_heatmap = gaussian_filter(heatmap.astype(np.float32), sigma=sigma)
    smoothed_heatmap = (smoothed_heatmap - smoothed_heatmap.min()) / \
                      (smoothed_heatmap.max() - smoothed_heatmap.min())
    colored_heatmap = (colormap(smoothed_heatmap) * 255).astype(np.uint8)
    
    if colored_heatmap.shape[-1] == 4:
        colored_heatmap = colored_heatmap[:, :, :3]
    
    overlay = cv2.addWeighted(image, 1 - alpha, colored_heatmap, alpha, 0)
    return Image.fromarray(overlay)

def save_visualization(selection_probs, image, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    resize_transform = transforms.Resize(image.size[::-1])
    for i, prob in enumerate(selection_probs):
        prob = (prob - prob.min()) / (prob.max() - prob.min() + 1e-6)
        prob = resize_transform(prob)
        prob = prob.squeeze(0).detach().cpu().numpy()
        # overlay the selection probability map on the original image
        overlay = create_heatmap_overlay(np.array(image), prob)
        overlay.save(os.path.join(output_dir, f"selection_prob_scale_{i}.png"))
    image.save(os.path.join(output_dir, "image.png"))

#################### End of helper functions ####################

selection_probs = outs.selection_probs
print([p.shape for p in selection_probs])  # [(1, 54, 54), (1, 108, 108), (1, 270, 270)]
save_visualization(selection_probs, image, "save_path/bottom_up_selection_probs")
```
`selection_probs` contains the selection probability map for each high-res scale. In this case, the feature maps of the three scales have shapes 54x54, 108x108, and 270x270. The selection probability reflects how salient/important each patch is, and patches with higher probability are selected first. You can visit the demo for more visualizations.

![Bottom-Up Selection Probabilities](assets/example_selection_maps/bottom_up_selection_prob.png)


### 3. Encode High-Res Image with Top-Down Selection

PS3 can also select important high-res patches based on any text prompt.

First, load the text model and encode the text prompt.
```python
from ps3 import PS3Tokenizer, PS3TextModel

tokenizer = PS3Tokenizer.from_pretrained("nvidia/PS3-4K-SigLIP2")
text_model = PS3TextModel.from_pretrained("nvidia/PS3-4K-SigLIP2")
text_model.cuda().eval()

text = ["A tall spire with a cross at the top of the building."]
text = tokenizer(text).cuda()
prompt = text_model(text).prompt
```

Then PS3 can select important high-res patches based on the text prompt and encode those patches.
```python
outs = vision_model(x, num_look_close=2, prompt=prompt)
features = outs.last_hidden_state
print(features.shape)  # (1, 5849, 1152)
```

You can visualize the top-down selection probabilities. Usually the regions related to the text prompt have higher selection probabilities.
```python
selection_probs = outs.selection_probs
save_visualization(selection_probs, image, "save_path/top_down_selection_probs_1")
```

![Top-Down Selection Probabilities](assets/example_selection_maps/top_down_selection_prob_1.png)

You can change to another text prompt and see different selection probabilities.
```python
text = ["A green rope on the green and red boat."]
text = tokenizer(text).cuda()
prompt = text_model(text).prompt
outs = vision_model(x, num_look_close=2, prompt=prompt)
selection_probs = outs.selection_probs
save_visualization(selection_probs, image, "save_path/top_down_selection_probs_2")
```

![Top-Down Selection Probabilities](assets/example_selection_maps/top_down_selection_prob_2.png)

### 4. Format the Encoded Features into (Masked) Feature Maps

The features returned above are the concatenation of all the low-res and high-res features.

You can format the features into masked feature maps for each scale.
```python
feature_maps = vision_model.vision_model.format_features_into_feature_maps(outs.last_hidden_state, outs.selection_maps)
print([x.shape for x in feature_maps])  # [(1, 1152, 27, 27), (1, 1152, 54, 54), (1, 1152, 108, 108), (1, 1152, 270, 270)]
```
This returns `feature_maps`, a list of masked feature maps (B * C * H * W), one per scale. Each feature map contains the actual features for the selected patches at that scale and zero vectors for the unselected patches.
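The scatter-into-a-masked-map idea can be illustrated with a toy NumPy sketch (the helper name, shapes, and data are invented here; the real implementation is `format_features_into_feature_maps` in the PS3 codebase):

```python
import numpy as np

def format_into_feature_map(tokens, selection_map):
    """Scatter selected tokens into a zero-initialized C x H x W map.

    tokens: (num_selected, C) features of the selected patches,
            ordered to match the nonzero entries of selection_map.
    selection_map: (H, W) array of 0/1 selection flags.
    """
    C = tokens.shape[1]
    H, W = selection_map.shape
    fmap = np.zeros((C, H, W), dtype=tokens.dtype)
    ys, xs = np.nonzero(selection_map)
    fmap[:, ys, xs] = tokens.T  # unselected positions stay zero
    return fmap

sel = np.array([[1, 0], [0, 1]])          # 2x2 grid, 2 patches selected
tok = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2 selected tokens, C = 2
fmap = format_into_feature_map(tok, sel)
print(fmap[:, 0, 0])  # [1. 2.] -- selected patch keeps its feature
print(fmap[:, 0, 1])  # [0. 0.] -- unselected patch is a zero vector
```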


## Inference instructions

[Inference - Quick Start](#inference---quick-start) gives some examples of how to use PS3 to encode an image. Below are more detailed explanations of the arguments of model inference.

```python
class PS3VisionModel(PS3PreTrainedModel):
    ...
    def forward(
        self,
        pixel_values, 
        num_look_close, 
        num_token_look_close=None, 
        prompt=None, 
        gt_selection_maps=None, 
        smooth_selection_prob=False,
        only_select_first_n_scale=None,
        is_global_text=None, 
        pool_gt_token_only=False, 
    ):
    ...
```
`pixel_values`: the input images with shape (B, C, H, W).

`num_look_close`: how many times to run high-res selection and encoding. PS3 selects and processes 2560 patches each time. If set to `all`, PS3 selects all the high-res patches. If set to `0`, PS3 only returns the low-res features. If set to a number larger than needed to encode all the high-res patches, PS3 clamps it to the maximum number needed.

`num_token_look_close`: (optional) how many high-res patches to select and process. Similar to `num_look_close`, but `num_token_look_close` directly specifies the number of high-res tokens instead of the number of high-res encoding runs.

`prompt`: (optional) the prompt embedding used to select high-res patches. The prompt embedding can be the embedding of some text, or an embedding output by an LLM (see the paper). The shape of the prompt embedding is (B, C), where B is the batch size (same as `pixel_values`) and C is the embedding dimension (same as the PS3 token embedding dimension). If `prompt=None`, then PS3 selects high-res patches based on visual saliency (bottom-up selection).

`gt_selection_maps`: (optional) the ground-truth selection maps for the image. It should be a tensor of 0/1 values with shape (B, h, w), where value 1 marks regions that should be selected. When selecting high-res patches, PS3 interpolates `gt_selection_maps` to the same size as the feature map at each scale, prioritizes selecting the tokens where the value is 1, and, if there is still budget for more tokens, selects the rest based on the original selection probability.

`smooth_selection_prob`: (optional) smooth the selection probability map so that the selected patches are not distributed too sparsely in each high-res selection run. It occasionally improves performance slightly when selecting all the patches but usually hurts when selecting only part of them.

`only_select_first_n_scale`: (optional) only select the first n high-res scales. For example, for PS3-4K model, if `only_select_first_n_scale=2`, then it only selects and processes scales of 756 and 1512, and ignores the scale of 3780.

`is_global_text`: (optional) only return the pooled low-res features. *It will only be used during pre-training.*

`pool_gt_token_only`: (optional) only pool the tokens inside the gt selection regions. *It will only be used during pre-training.*
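As a rough mental model of the `num_look_close` clamping described above (this is an illustrative sketch, not the library's actual code; the high-res patch total of 87480 corresponds to the 756/1512/3780 scales used in the quick-start examples):

```python
import math

def resolve_num_look_close(num_look_close,
                           total_high_res_patches=87480,
                           patches_per_pass=2560):
    """Return the effective number of selection passes PS3 would run."""
    max_passes = math.ceil(total_high_res_patches / patches_per_pass)
    if num_look_close == "all":
        return max_passes          # encode every high-res patch
    return min(num_look_close, max_passes)  # clamp oversized requests

print(resolve_num_look_close("all"))  # 35
print(resolve_num_look_close(2))      # 2
print(resolve_num_look_close(100))    # 35 (clamped)
```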


### Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.  When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).


## More Details
Please refer to the [PS3 codebase](https://github.com/NVlabs/PS3) for more details.


## Citation

If you find this work useful in your research, please consider citing:

```bibtex
@article{shi2025scaling,
  title={Scaling Vision Pre-Training to 4K Resolution},
  author={Shi, Baifeng and Li, Boyi and Cai, Han and Lu, Yao and Liu, Sifei and Pavone, Marco and Kautz, Jan and Han, Song and Darrell, Trevor and Molchanov, Pavlo and others},
  journal={arXiv preprint arXiv:2503.19903},
  year={2025}
}
```