SPLADE-BERT-Mini-Distil

This is a SPLADE sparse retrieval model based on BERT-Mini (11M parameters), trained by distilling the ms-marco-MiniLM-L6-v2 cross-encoder on the MS MARCO dataset.

This tiny SPLADE model is 6x smaller than Naver's official splade-v3-distilbert while retaining 85% of its performance on the MS MARCO benchmark. It is small enough to be used without a GPU on a dataset of a few thousand documents.

Performance

The SPLADE models were evaluated on 55 thousand queries and 8.84 million documents from the MS MARCO dataset.

| Model | Size (# Params) | MRR@10 (MS MARCO dev) |
|:------|:----------------|:----------------------|
| BM25 | - | 18.0 |
| rasyosef/splade-tiny | 4.4M | 30.9 |
| rasyosef/splade-mini (this model) | 11.2M | 33.2 |
| naver/splade-v3-distilbert | 67.0M | 38.7 |
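
MRR@10 rewards ranking a relevant document as high as possible: for each query, take the reciprocal rank of the first relevant document within the top 10 results (0 if none appears), then average over all queries. A minimal sketch of the computation (the function name and input format are illustrative):

def mrr_at_10(ranked_ids_per_query, relevant_ids_per_query):
    # ranked_ids_per_query: one ranked list of doc ids per query
    # relevant_ids_per_query: one set of relevant doc ids per query
    total = 0.0
    for ranked, relevant in zip(ranked_ids_per_query, relevant_ids_per_query):
        for rank, doc_id in enumerate(ranked[:10], start=1):
            if doc_id in relevant:
                total += 1.0 / rank  # reciprocal rank of the first hit
                break
    return total / len(ranked_ids_per_query)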

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/splade-mini")
# Run inference
queries = [
    "research background definition",
]
documents = [
    'Social Sciences. Background research refers to accessing the collection of previously published and unpublished information about a site, region, or particular topic of interest and it is the first step of all good archaeological investigations, as well as that of all writers of any kind of research paper.',
    'This Research Paper Background and Problem Definition and other 62,000+ term papers, college essay examples and free essays are available now on ReviewEssays.com. Autor: dharath1 • July 22, 2014 • Research Paper • 442 Words (2 Pages) • 448 Views.',
    'About the Month of February. February is the 2nd month of the year and has 28 or 29 days. The 29th day is every 4 years during leap year. Season (Northern Hemisphere): Winter. Holidays. Chinese New Year. National Freedom Day. Groundhog Day.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[22.7011, 11.1635,  0.0000]])
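
The embeddings are vocabulary-sized sparse vectors, so the non-zero dimensions can be mapped back to tokens. A short sketch using SparseEncoder.decode from Sentence Transformers v5 (the printed tokens and weights are illustrative, not actual model output):

# Inspect the highest-weighted vocabulary tokens in the sparse query embedding
decoded_query = model.decode(query_embeddings[0], top_k=10)
print(decoded_query)
# e.g. [('background', 2.3), ('research', 2.2), ('definition', 1.9), ...]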

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-mini
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product


Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
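
For intuition, SpladePooling with max pooling and a ReLU activation corresponds to the standard SPLADE formulation: each output dimension j is max_i log(1 + ReLU(logit_ij)) over input positions i, using the logits of the masked-language-modeling head. A minimal PyTorch sketch of that computation (not the library's actual implementation):

import torch

def splade_max_pooling(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # mlm_logits: [batch, seq_len, vocab_size], attention_mask: [batch, seq_len]
    activations = torch.log1p(torch.relu(mlm_logits))         # log-saturated ReLU
    activations = activations * attention_mask.unsqueeze(-1)  # zero out padding
    return activations.max(dim=1).values                      # [batch, vocab_size]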


Evaluation

Metrics

Sparse Information Retrieval

| Metric | Value |
|:----------------------|:---------|
| dot_accuracy@1 | 0.4976 |
| dot_accuracy@3 | 0.8154 |
| dot_accuracy@5 | 0.9122 |
| dot_accuracy@10 | 0.9684 |
| dot_precision@1 | 0.4976 |
| dot_precision@3 | 0.2791 |
| dot_precision@5 | 0.1899 |
| dot_precision@10 | 0.1018 |
| dot_recall@1 | 0.4821 |
| dot_recall@3 | 0.8021 |
| dot_recall@5 | 0.9035 |
| dot_recall@10 | 0.9639 |
| dot_ndcg@10 | 0.7392 |
| dot_mrr@10 | 0.669 |
| dot_map@100 | 0.6647 |
| query_active_dims | 16.8104 |
| query_sparsity_ratio | 0.9994 |
| corpus_active_dims | 100.6221 |
| corpus_sparsity_ratio | 0.9967 |
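
These metric names match the output of the sparse information-retrieval evaluator in Sentence Transformers v5. A hedged sketch of running such an evaluation; the toy queries/corpus/relevance dicts are placeholders, not the actual MS MARCO data:

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

model = SparseEncoder("rasyosef/splade-mini")

# Placeholder evaluation data: ids mapped to text, plus relevance judgments
queries = {"q1": "research background definition"}
corpus = {"d1": "Background research refers to ...", "d2": "About the Month of February ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = SparseInformationRetrievalEvaluator(queries=queries, corpus=corpus, relevant_docs=relevant_docs)
results = evaluator(model)
print(results)  # includes keys such as dot_mrr@10 and query_active_dims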

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,350,000 training samples
  • Columns: query, positive, negative, and label
  • Approximate statistics based on the first 1000 samples:

    |         | query | positive | negative | label |
    |:--------|:------|:---------|:---------|:------|
    | type    | string | string | string | list |
    | details | min: 4 tokens, mean: 8.95 tokens, max: 25 tokens | min: 15 tokens, mean: 79.36 tokens, max: 215 tokens | min: 21 tokens, mean: 78.39 tokens, max: 233 tokens | size: 1 elements |
  • Samples (the sketch after this list shows how such label scores can be produced):
    • query: what causes protruding stomach
      positive: Some of the less common causes of Protruding abdomen may include: 1 Constipation. 2 Chronic constipation. 3 Poor muscle tone. Poor muscle tone after 1 childbirth. Lactose intolerance. Food 1 allergies. Food intolerances. 2 Pregnancy. 3 Hernia. Malabsorption. Irritable bowel 1 syndrome. Colonic bacterial fermentation. 2 Gastroparesis. Diabetic gastroparesis.
      negative: Protruding abdomen: Introduction. Protruding abdomen: abdominal distension. See detailed information below for a list of 56 causes of Protruding abdomen, Symptom Checker, including diseases and drug side effect causes. » Review Causes of Protruding abdomen: Causes Symptom Checker ». Home Diagnostic Testing and Protruding abdomen.
    • query: what is bialys
      positive: The bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.Like the Lowland gorilla, the cassette tape and Madagascar forest coconuts, the bialy is rapidly becoming extinct. Sure, if you live in New York (where the Jewish tenements on the Lower East Side once overflowed with Eastern European foodstuffs that are now hard to locate), you have a few decent options.he bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.
      negative: This homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.his homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.
      label: [5.632390975952148]
    • query: dhow definition
      positive: Definition of dhow. : an Arab lateen-rigged boat usually having a long overhang forward, a high poop, and a low waist.
      negative: Freebase(0.00 / 0 votes)Rate this definition: Dhow. Dhow is the generic name of a number of traditional sailing vessels with one or more masts with lateen sails used in the Red Sea and Indian Ocean region. Historians are divided as to whether the dhow was invented by Arabs or Indians.
      label: [0.8292264938354492]
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMarginMSELoss",
        "document_regularizer_weight": 0.2,
        "query_regularizer_weight": 0.3
    }
    
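
The label column holds a single teacher score per triplet. Since the loss is SparseMarginMSELoss and the stated teacher is ms-marco-MiniLM-L6-v2, the label is presumably the cross-encoder margin score(query, positive) - score(query, negative). A hedged sketch of how such a label could be produced:

from sentence_transformers import CrossEncoder

# Hedged sketch: assumes the one-element label is the teacher's margin,
# which is the supervision signal SparseMarginMSELoss expects.
teacher = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")
query = "dhow definition"
positive = "Definition of dhow. : an Arab lateen-rigged boat ..."
negative = "Freebase(0.00 / 0 votes)Rate this definition: Dhow. ..."
scores = teacher.predict([(query, positive), (query, negative)])
margin = float(scores[0] - scores[1])  # plays the role of the label column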

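A hedged sketch of how this loss configuration maps onto the Sentence Transformers v5 API; initializing a fresh SPLADE model directly from the base MLM checkpoint is an assumption borrowed from the library's SPLADE training examples:

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

# Fresh SPLADE model (MLMTransformer + SpladePooling) from the base checkpoint
model = SparseEncoder("prajjwal1/bert-mini")

loss = SpladeLoss(
    model=model,
    loss=SparseMarginMSELoss(model),  # distillation against teacher margins
    document_regularizer_weight=0.2,  # FLOPS sparsity penalty on documents
    query_regularizer_weight=0.3,     # FLOPS sparsity penalty on queries
)
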
Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 6e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.05
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
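
A hedged sketch mapping these values onto the v5 training API; model and loss are assumed to come from the loss sketch above, and train_dataset is assumed to be a datasets.Dataset with query/positive/negative/label columns:

from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-mini-distil",  # placeholder output path
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=6e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
    load_best_model_at_end=True,  # needs an eval set and metric, omitted here
    optim="adamw_torch_fused",
    push_to_hub=True,
)

trainer = SparseEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()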

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 6e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch | Step | Training Loss | dot_ndcg@10 |
|:-----:|:------:|:-------------:|:-----------:|
| 1.0 | 42188 | 8.6242 | 0.7262 |
| 2.0 | 84376 | 7.0404 | 0.7362 |
| 3.0 | 126564 | 5.3661 | 0.7388 |
| **4.0** | **168752** | **4.4807** | **0.7392** |

  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.8.1
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMarginMSELoss

@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}