---
name: DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS
base_model: openai/clip-vit-large-patch14
license: apple-amlr
license_link: LICENSE
pipeline_tag: zero-shot-image-classification
tags:
- clip
- Apple
- OpenAI
size:
- 1710540580
- 1.7 GB
tasks:
- contrastive image-text
- vision
language: en
papers:
- https://arxiv.org/abs/2309.17425
datasets:
- CommonPool-12.8B
---

> [!IMPORTANT]
> Original model link: [https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14-39B](https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14-39B)

# DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS

A drop-in replacement for OpenCLIP, trained on DFN-2B: a dataset selected by a Data Filtering Network from the 12.8B uncurated image-text pairs of CommonPool-12.8B.
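Zero-shot image classification with a CLIP checkpoint follows the standard recipe: embed the image and a set of class prompts, L2-normalize both, then softmax the scaled cosine similarities into per-class probabilities. A minimal sketch of that scoring step, with random placeholder vectors standing in for the real encoder outputs (the actual model loading and encoders are omitted; the 768-dimensional size matches ViT-L/14's projection dimension):

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """Score one image embedding against N class-prompt embeddings.

    CLIP-style scoring: L2-normalize both sides, take scaled cosine
    similarities, and softmax them into per-class probabilities.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)   # (N,) scaled cosine similarities
    logits -= logits.max()               # subtract max for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# Placeholder embeddings standing in for real CLIP encoder outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=768)         # one image embedding
text_embs = rng.normal(size=(3, 768))    # prompts for 3 hypothetical classes

probs = zero_shot_probs(image_emb, text_embs)
```

In practice the two embeddings come from the checkpoint's image and text towers; the sketch only illustrates how their outputs are turned into class probabilities.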