OpenAI-Clip: Optimized for Qualcomm Devices

Contrastive Language-Image Pre-Training (CLIP) uses a ViT-like transformer to extract visual features and a causal language model to extract text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
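
For illustration, below is a minimal zero-shot classification sketch using the original open-source openai/CLIP package (the reference PyTorch model, not the pre-exported Qualcomm® assets described later); the image path and candidate captions are placeholders.

```python
# Minimal zero-shot classification sketch with the reference openai/CLIP
# package (pip install git+https://github.com/openai/CLIP.git).
# "example.jpg" and the candidate captions are placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # similarity-based probability for each candidate caption
```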

This is based on the implementation of OpenAI-Clip found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_DLC | float | Universal | QAIRT 2.42 | Download |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | Download |

For more device-specific assets and performance metrics, visit OpenAI-Clip on Qualcomm® AI Hub.
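
As a quick sanity check, a downloaded ONNX asset can be loaded with ONNX Runtime and its input/output signature inspected. The sketch below assumes a single ONNX file; the file name is a placeholder, and the actual graph layout depends on how the asset was exported.

```python
# Sketch: load a downloaded ONNX asset and inspect its I/O signature.
# "openai_clip.onnx" is a placeholder file name; the real asset may ship
# as separate image- and text-encoder graphs, so check what you downloaded.
import onnxruntime as ort

session = ort.InferenceSession("openai_clip.onnx", providers=["CPUExecutionProvider"])

for inp in session.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```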

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See our repository for OpenAI-Clip on GitHub for usage instructions.
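
As a rough sketch of what an export looks like, the library exposes a per-model export entry point that is typically invoked as a Python module. The module path, device name, and flags below are assumptions based on the library's usual layout and may differ for your installed version; consult the repository for the exact options.

```python
# Hedged sketch: invoke the export entry point programmatically. Assumed
# command-line equivalent:
#   pip install qai-hub-models
#   python -m qai_hub_models.models.openai_clip.export --device "Samsung Galaxy S24"
# The module path, device name, and flags are assumptions, not confirmed
# by this model card; check the qai-hub-models repository for your version.
import runpy
import sys

sys.argv = ["export", "--device", "Samsung Galaxy S24"]
runpy.run_module("qai_hub_models.models.openai_clip.export", run_name="__main__")
```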

Model Details

Model Type: Image classification

Model Stats:

  • Model checkpoint: ViT-B/16
  • Image input resolution: 224x224
  • Text context length: 77
  • Number of parameters: 150M
  • Model size (float): 571 MB
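
For illustration, dummy inputs matching the stats above would have the shapes shown below. This is a shape sketch only: the pre-exported assets may split CLIP into separate image- and text-encoder graphs with their own input names, so verify the actual I/O of the file you download.

```python
# Shape sketch only: dummy tensors matching the stats above
# (224x224 RGB image, 77-token text context). Verify the actual input
# names, dtypes, and layout of the downloaded asset before use.
import numpy as np

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # NCHW image tensor
tokens = np.zeros((1, 77), dtype=np.int32)                  # padded token ids

print(image.shape, tokens.shape)
```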

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| OpenAI-Clip | ONNX | float | Snapdragon® X Elite | 22.459 | 294 - 294 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 15.269 | 0 - 796 | NPU |
| OpenAI-Clip | ONNX | float | Qualcomm® QCS8550 (Proxy) | 22.212 | 0 - 323 | NPU |
| OpenAI-Clip | ONNX | float | Qualcomm® QCS9075 | 25.758 | 0 - 4 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 12.318 | 1 - 713 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.095 | 1 - 661 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® X Elite | 18.808 | 1 - 1 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 12.624 | 0 - 550 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 55.883 | 1 - 507 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 17.922 | 1 - 593 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA8775P | 20.876 | 1 - 504 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS9075 | 20.902 | 1 - 3 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 21.094 | 0 - 501 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA7255P | 55.883 | 1 - 507 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA8295P | 22.195 | 0 - 492 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 10.588 | 1 - 515 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 8.462 | 0 - 485 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 11.008 | 0 - 562 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 52.168 | 0 - 512 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 15.689 | 0 - 4 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA8775P | 18.638 | 0 - 509 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS9075 | 20.357 | 0 - 294 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 20.361 | 0 - 503 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA7255P | 52.168 | 0 - 512 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA8295P | 21.567 | 0 - 495 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 9.059 | 0 - 526 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 7.023 | 0 - 497 | NPU |

License

  • The license for the original implementation of OpenAI-Clip can be found here.
