CLIP ViT-L/14 model trained on COCO Captions

This is a sentence-transformers model finetuned from openai/clip-vit-large-patch14 on the coco_captions dataset. It maps sentences and images to a shared 1024-dimensional dense vector space and can be used for image-text retrieval, semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: openai/clip-vit-large-patch14
  • Maximum Sequence Length: 77 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: coco_captions
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'transformer_task': 'feature-extraction', 'modality_config': {'text': {'method': 'get_text_features', 'method_output_name': None}, 'image': {'method': 'get_image_features', 'method_output_name': None}}, 'module_output_name': 'sentence_embedding', 'architecture': 'CLIPModel'})
)
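
The single Transformer module dispatches on input modality: text is routed through CLIPModel.get_text_features and images through CLIPModel.get_image_features, as listed in modality_config above. Roughly, this corresponds to calling the base model directly with transformers; a minimal sketch under that assumption (not the module's exact internal code):

import torch
from transformers import CLIPModel, CLIPProcessor

# Load the base CLIP model and its processor (tokenizer + image preprocessor)
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Text path: tokenize, then embed through the text tower
text_inputs = processor(text=["A cat on a couch"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_features = clip.get_text_features(**text_inputs)

# Image path: preprocess with processor(images=...), then clip.get_image_features(...)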

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/clip-vit-L14-coco")
# Run inference
sentences = [
    'A large desk by a window is neatly arranged.',
    'A long hot dog on a plate on a table.',
    'A lady sitting at an enormous dining table with lots of food.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0302,  0.1619],
#         [-0.0302,  1.0000,  0.1578],
#         [ 0.1619,  0.1578,  1.0000]])
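
Because this is a CLIP model, encode() also accepts PIL images, so text and images can be embedded into the same space for image-text retrieval. A minimal sketch (the image filename is hypothetical):

from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomaarsen/clip-vit-L14-coco")

# Hypothetical local file; any RGB image works
image = Image.open("desk_by_window.jpg")

# Embed the image and candidate captions into the shared space
img_emb = model.encode([image])
txt_emb = model.encode([
    "A large desk by a window is neatly arranged.",
    "A long hot dog on a plate on a table.",
])

# Higher cosine similarity indicates a better caption match
print(model.similarity(img_emb, txt_emb))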

Evaluation

Metrics

Information Retrieval

Metric               coco-eval  coco-test
cosine_accuracy@1    0.799      0.776
cosine_accuracy@3    0.968      0.959
cosine_accuracy@5    0.991      0.986
cosine_accuracy@10   0.995      0.995
cosine_precision@1   0.799      0.776
cosine_precision@3   0.3227     0.3197
cosine_precision@5   0.1982     0.1972
cosine_precision@10  0.0995     0.0995
cosine_recall@1      0.799      0.776
cosine_recall@3      0.968      0.959
cosine_recall@5      0.991      0.986
cosine_recall@10     0.995      0.995
cosine_ndcg@10       0.9112     0.8997
cosine_mrr@10        0.8827     0.8674
cosine_map@100       0.8828     0.8678
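
Since each query has exactly one relevant document in this setup, accuracy@k and recall@k coincide, and precision@k equals accuracy@k divided by k (e.g. 0.968 / 3 ≈ 0.3227), which matches the table above. A minimal NumPy sketch of accuracy@k under that single-relevant-document assumption (illustrative only, not the evaluator that produced these scores):

import numpy as np

def accuracy_at_k(scores: np.ndarray, relevant: np.ndarray, k: int) -> float:
    """scores: (n_queries, n_docs) similarity matrix;
    relevant: (n_queries,) index of each query's single relevant document."""
    # Indices of the k highest-scoring documents per query
    topk = np.argsort(-scores, axis=1)[:, :k]
    # Fraction of queries whose relevant document appears in the top k
    return float(np.mean((topk == relevant[:, None]).any(axis=1)))

# Toy check: 2 queries, 3 documents, relevant documents at indices 0 and 2
scores = np.array([[0.9, 0.1, 0.3], [0.2, 0.8, 0.7]])
print(accuracy_at_k(scores, np.array([0, 2]), k=2))  # 1.0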

Training Details

Training Dataset

coco_captions

  • Dataset: coco_captions at a2ed90d
  • Size: 10,000 training samples
  • Columns: image and caption
  • Approximate statistics based on the first 1000 samples:
    • image: type PIL.JpegImagePlugin.JpegImageFile
    • caption: string; min: 28 characters, mean: 52.56 characters, max: 156 characters
  • Samples (captions shown; the paired images are omitted here):
    • A woman wearing a net on her head cutting a cake.
    • A woman cutting a large white sheet cake.
    • A woman wearing a hair net cutting a large sheet cake.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
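
These parameters map directly onto the loss constructor in sentence-transformers. A minimal sketch of building the same loss (gather_across_devices is already False by default on a single GPU):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("openai/clip-vit-large-patch14")

# scale and similarity_fct mirror the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)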
    

Evaluation Dataset

coco_captions

  • Dataset: coco_captions at a2ed90d
  • Size: 1,000 evaluation samples
  • Columns: image and caption
  • Approximate statistics based on the first 1000 samples:
    • image: type PIL.JpegImagePlugin.JpegImageFile
    • caption: string; min: 27 characters, mean: 52.45 characters, max: 151 characters
  • Samples (captions shown; the paired images are omitted here):
    • A child holding a flowered umbrella and petting a yak.
    • A young man holding an umbrella next to a herd of cattle.
    • a young boy barefoot holding an umbrella touching the horn of a cow
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
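
Taken together, these map onto a trainer configuration along the following lines; a sketch only, where the dataset id and output directory are hypothetical (the card only identifies the data as "coco_captions at a2ed90d") and the evaluator setup is elided:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("openai/clip-vit-large-patch14")
# Hypothetical dataset id
dataset = load_dataset("coco_captions")

args = SentenceTransformerTrainingArguments(
    output_dir="clip-vit-L14-coco",  # hypothetical
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    # Avoids duplicate samples within a batch, which would act as
    # false negatives for the in-batch-negatives ranking loss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=losses.MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()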

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • use_cpu: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: True
  • fp16: False
  • half_precision_backend: None
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step  Training Loss  Validation Loss  coco-eval_cosine_ndcg@10  coco-test_cosine_ndcg@10
-1      -1    -              -                0.8902                    -
0.0112  7     0.4782         -                -                         -
0.0224  14    0.3108         -                -                         -
0.0336  21    0.2212         -                -                         -
0.0448  28    0.1612         -                -                         -
0.056   35    0.1853         -                -                         -
0.0672  42    0.0811         -                -                         -
0.0784  49    0.0785         -                -                         -
0.0896  56    0.1022         -                -                         -
0.1008  63    0.0927         0.1433           0.9189                    -
0.112   70    0.112          -                -                         -
0.1232  77    0.1072         -                -                         -
0.1344  84    0.1272         -                -                         -
0.1456  91    0.1176         -                -                         -
0.1568  98    0.1361         -                -                         -
0.168   105   0.1281         -                -                         -
0.1792  112   0.0961         -                -                         -
0.1904  119   0.1038         -                -                         -
0.2016  126   0.1019         0.1506           0.8929                    -
0.2128  133   0.0657         -                -                         -
0.224   140   0.1187         -                -                         -
0.2352  147   0.0752         -                -                         -
0.2464  154   0.2314         -                -                         -
0.2576  161   0.0806         -                -                         -
0.2688  168   0.1243         -                -                         -
0.28    175   0.1179         -                -                         -
0.2912  182   0.1174         -                -                         -
0.3024  189   0.0926         0.1604           0.8907                    -
0.3136  196   0.1327         -                -                         -
0.3248  203   0.0861         -                -                         -
0.336   210   0.0677         -                -                         -
0.3472  217   0.1296         -                -                         -
0.3584  224   0.1322         -                -                         -
0.3696  231   0.1555         -                -                         -
0.3808  238   0.0807         -                -                         -
0.392   245   0.1134         -                -                         -
0.4032  252   0.1826         0.1712           0.8840                    -
0.4144  259   0.1796         -                -                         -
0.4256  266   0.186          -                -                         -
0.4368  273   0.0971         -                -                         -
0.448   280   0.063          -                -                         -
0.4592  287   0.1344         -                -                         -
0.4704  294   0.072          -                -                         -
0.4816  301   0.1233         -                -                         -
0.4928  308   0.1152         -                -                         -
0.504   315   0.148          0.1565           0.8960                    -
0.5152  322   0.0836         -                -                         -
0.5264  329   0.1171         -                -                         -
0.5376  336   0.1433         -                -                         -
0.5488  343   0.0494         -                -                         -
0.56    350   0.1533         -                -                         -
0.5712  357   0.0773         -                -                         -
0.5824  364   0.0921         -                -                         -
0.5936  371   0.0546         -                -                         -
0.6048  378   0.1444         0.1496           0.9001                    -
0.616   385   0.0956         -                -                         -
0.6272  392   0.0445         -                -                         -
0.6384  399   0.0939         -                -                         -
0.6496  406   0.1109         -                -                         -
0.6608  413   0.0466         -                -                         -
0.672   420   0.0627         -                -                         -
0.6832  427   0.0857         -                -                         -
0.6944  434   0.058          -                -                         -
0.7056  441   0.1542         0.1443           0.9031                    -
0.7168  448   0.0972         -                -                         -
0.728   455   0.0892         -                -                         -
0.7392  462   0.0819         -                -                         -
0.7504  469   0.0838         -                -                         -
0.7616  476   0.0754         -                -                         -
0.7728  483   0.0754         -                -                         -
0.784   490   0.0638         -                -                         -
0.7952  497   0.1006         -                -                         -
0.8064  504   0.0398         0.1429           0.9122                    -
0.8176  511   0.1562         -                -                         -
0.8288  518   0.1039         -                -                         -
0.84    525   0.0342         -                -                         -
0.8512  532   0.0467         -                -                         -
0.8624  539   0.0703         -                -                         -
0.8736  546   0.0655         -                -                         -
0.8848  553   0.0216         -                -                         -
0.896   560   0.029          -                -                         -
0.9072  567   0.0588         0.1530           0.9112                    -
0.9184  574   0.1145         -                -                         -
0.9296  581   0.0652         -                -                         -
0.9408  588   0.0556         -                -                         -
0.952   595   0.0458         -                -                         -
0.9632  602   0.0085         -                -                         -
0.9744  609   0.0572         -                -                         -
0.9856  616   0.0942         -                -                         -
0.9968  623   0.109          -                -                         -
-1      -1    -              -                -                         0.8997

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.043 kWh
  • Carbon Emitted: 0.012 kg of CO2
  • Hours Used: 0.137 hours
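
For context, CodeCarbon figures like these are typically gathered by wrapping the training run in an EmissionsTracker; a minimal sketch (not necessarily the exact setup used here):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # total emissions in kg CO2-eq
print(f"{emissions_kg:.3f} kg CO2 emitted")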

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.2.0.dev0
  • Transformers: 4.57.0.dev0
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}