|
# HPSv2-hf |
|
This is a Hugging Face CLIPModel flavor of the [HPSv2](https://github.com/tgxs002/HPSv2/) model, which is trained to predict human preferences over AI-generated images.
|
I converted the model weights from the OpenCLIP format to the Hugging Face CLIPModel format. The text and image embeddings were verified to match before and after conversion.
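
If you want to reproduce that check yourself, a minimal sketch might look like the following; the OpenCLIP checkpoint path is a placeholder, and the comparison tolerance is an assumption:

```python
import torch
import open_clip
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Original OpenCLIP model (HPSv2 is an OpenCLIP ViT-H-14 checkpoint;
# the checkpoint path below is a placeholder).
oc_model, _, oc_preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="path/to/HPSv2.pt"
)
oc_tokenizer = open_clip.get_tokenizer("ViT-H-14")
oc_model.eval()

# Converted Hugging Face model.
hf_model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
hf_model.eval()

image = Image.open("example.jpg")  # placeholder image
text = ["a photo of a cat"]

with torch.no_grad():
    # OpenCLIP embeddings
    oc_img = oc_model.encode_image(oc_preprocess(image).unsqueeze(0))
    oc_txt = oc_model.encode_text(oc_tokenizer(text))

    # Hugging Face embeddings
    inputs = processor(text=text, images=image, return_tensors="pt", padding=True)
    hf_img = hf_model.get_image_features(pixel_values=inputs["pixel_values"])
    hf_txt = hf_model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )

print(torch.allclose(oc_img, hf_img, atol=1e-4))  # tolerance is an assumption
print(torch.allclose(oc_txt, hf_txt, atol=1e-4))
```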
|
You can load the model the same way as any Hugging Face CLIP model:
|
```python
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")  # uses the exact same vanilla CLIP processor
```
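
Since this is a standard CLIPModel, predicting a preference score reduces to the usual CLIP image-text similarity. A minimal sketch, where the image file and prompt are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated_image.jpg")  # placeholder: an AI-generated image
prompt = "a painting of a fox in a snowy forest"  # placeholder: its generation prompt

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    # logits_per_image is the logit-scaled cosine similarity between the image
    # and the prompt; a higher value indicates a stronger predicted preference.
    hps_score = outputs.logits_per_image.item()

print(f"HPS score: {hps_score:.4f}")
```

To compare several candidate images generated from the same prompt, score each one this way and pick the highest.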
|
All credit goes to the original authors of HPSv2.