# HPSv2-hf
This is a Hugging Face `CLIPModel` flavor of the [HPSv2](https://github.com/tgxs002/HPSv2/) model, which is trained to predict human preferences over AI-generated images.

I converted the model weights from the OpenCLIP format to the Hugging Face `CLIPModel` format. The text and image embeddings were verified to be equal before and after conversion.

You can load the model the same way as any Hugging Face CLIP model:
```python
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")  # uses the standard vanilla CLIP processor
```
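Once loaded, the model can score an image against a prompt like any CLIP model: the image-text similarity serves as the preference score. A minimal sketch (the solid-color test image and the prompt text are placeholders, not part of the original model card):

```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder inputs for illustration; in practice, pass the generated image
# and the prompt it was generated from.
image = Image.new("RGB", (224, 224), color=(128, 128, 128))
prompt = "a photo of a cat"

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image is the scaled image-text similarity; higher means the
# image is predicted to be more preferred for this prompt.
score = outputs.logits_per_image.item()
print(score)
```

To rank several candidate images for one prompt, pass them as a list to the processor and compare their scores.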
All credit goes to the original authors of HPSv2.