SPLADE Sparse Encoder

This is a SPLADE sparse retrieval model based on BERT-Tiny (4M parameters), trained by distilling the ms-marco-MiniLM-L6-v2 cross-encoder on the MS MARCO dataset.

This Tiny SPLADE model beats BM25 by 65.6% on the MS MARCO benchmark. While it is 15x smaller than Naver's official splade-v3-distilbert, it retains 80% of that model's performance on MS MARCO. The model is small enough to run without a GPU on a dataset of a few thousand documents.

Performance

The SPLADE models were evaluated on 55 thousand queries and 8.84 million documents from the MS MARCO dataset.

Model                        Size (# Params)   MRR@10 (MS MARCO dev)
BM25                         -                 18.0
rasyosef/splade-tiny         4.4M              30.9
rasyosef/splade-mini         11.2M             33.2
naver/splade-v3-distilbert   67.0M             38.7
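
The size and relative-performance claims in the introduction can be checked against this table; a quick arithmetic sketch in Python, using only the numbers reported here:

print(67.0 / 4.4)   # ~15.2, so splade-tiny is roughly 15x smaller than naver/splade-v3-distilbert
print(30.9 / 38.7)  # ~0.80, i.e. about 80% of its MRR@10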

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/splade-tiny")
# Run inference
queries = [
    "what do i need to change my name on my license in ma",
]
documents = [
    'Change your name on MA state-issued ID such as driver’s license or MA ID card. All documents you bring to RMV need to be originals or certified copies by the issuing agency. PAPERWORK NEEDED: Proof of legal name change — A court order showing your legal name change. Your Social Security Card with your new legal name change',
    "See below: 1. Get your marriage license. Before you can change your name, you'll need the original (or certified) marriage license with the raised seal and your new last name on it. Call the clerk's office where your license was filed to get copies if one wasn't automatically sent to you. 2. Change your Social Security card.",
    "You'll keep the same number—just your name will be different. Mail in your application to the local Social Security Administration office. You should get your new card within 10 business days. 3. Change your license at the DMV. Take a trip to the local Department of Motor Vehicles office to get a new license with your new last name. Bring every form of identification you can get your hands on—your old license, your certified marriage certificate and, most importantly, your new Social Security card.",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[16.6297, 13.4552, 10.1923]])
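
Because each embedding is a sparse vector over the 30,522-token BERT vocabulary, you can also inspect which vocabulary entries are active and how strongly. A minimal sketch, assuming the decode helper exposed by SparseEncoder in Sentence Transformers v5 (the top_k argument and exact return format may differ across versions):

# Decode the sparse query embedding into its highest-weighted (token, weight) pairs
decoded_query = model.decode(query_embeddings[0], top_k=10)
print(decoded_query)
# Prints the most strongly activated vocabulary tokens for the query,
# typically the query terms themselves plus related expansion terms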

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product

Model Sources

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
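
Conceptually, the MLMTransformer produces masked-language-model logits of shape (sequence length, 30522) for each input text, and SpladePooling collapses them into a single sparse vector by applying a ReLU, a log saturation, and a max over token positions. The PyTorch function below is an illustrative sketch of that pooling step, not the library's actual implementation:

import torch

def splade_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # mlm_logits: (batch, seq_len, vocab_size) MLM logits from the transformer
    # attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding
    scores = torch.log1p(torch.relu(mlm_logits))    # log(1 + ReLU(logit)) per token and vocab entry
    scores = scores * attention_mask.unsqueeze(-1)  # zero out padding positions
    return scores.max(dim=1).values                 # max-pool over tokens -> (batch, vocab_size)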


Evaluation

Metrics

Sparse Information Retrieval

Metric                  Value
dot_accuracy@1          0.4772
dot_accuracy@3          0.793
dot_accuracy@5          0.8964
dot_accuracy@10         0.96
dot_precision@1         0.4772
dot_precision@3         0.2713
dot_precision@5         0.1864
dot_precision@10        0.1009
dot_recall@1            0.4617
dot_recall@3            0.7799
dot_recall@5            0.8874
dot_recall@10           0.9559
dot_ndcg@10             0.7217
dot_mrr@10              0.649
dot_map@100             0.6447
query_active_dims       18.3342
query_sparsity_ratio    0.9994
corpus_active_dims      121.653
corpus_sparsity_ratio   0.996
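
The sparsity ratios follow directly from the active-dimension counts: with a 30,522-dimensional output, the sparsity ratio is one minus the fraction of active dimensions.

vocab_size = 30522
print(1 - 18.3342 / vocab_size)   # ~0.9994 (query_sparsity_ratio)
print(1 - 121.653 / vocab_size)   # ~0.9960 (corpus_sparsity_ratio)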

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,200,000 training samples
  • Columns: query, positive, negative_1, negative_2, and label
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 4 tokens, mean: 9.08 tokens, max: 35 tokens
    • positive: string; min: 23 tokens, mean: 79.02 tokens, max: 192 tokens
    • negative_1: string; min: 18 tokens, mean: 78.24 tokens, max: 230 tokens
    • negative_2: string; min: 13 tokens, mean: 75.26 tokens, max: 230 tokens
    • label: list; size: 2 elements
  • Samples:
    query positive negative_1 negative_2 label
    does alzheimer's affect sleep People with Alzheimer’s disease go through many changes, and sleep problems are often some of the most noticeable. Most adults have changes in their sleep patterns as they age. But the problems are more severe and happen more often for people with Alzheimer’s. Could the position you SLEEP in affect your risk of Alzheimer's? People who sleep on their side enable their brain to 'detox' better while they rest. While asleep, brain is hard at work removing toxins that build up in the day. If left to build up, these toxins can cause Alzheimer's and Parkinson's. The Scary Connection Between Snoring and Dementia. For more, visit TIME Health. If you don't snore, you likely know someone who does. Between 19% and 40% of adults snore when they sleep, and that percentage climbs even higher, particularly for men, as we age. [1.407266616821289, 10.169305801391602]
    what is fy in steel design Since the yield strength of the steel is quite clearly defined and controlled, this establishes a very precise reference in structural investigations. An early design decision is that for the yield strength (specified by the Grade of steel used) that is to be used in the design work.Several different grades of steel may be used for large projects, with a minimum grade for ordinary tasks and higher grades for more demanding ones.ost steel used for reinforcement is highly ductile in nature. Its usable strength is its yield strength, as this stress condition initiates such a magnitude of deformation (into the plastic yielding range of the steel), that major cracking will occur in the concrete. fy is the yield point of the material. E is the symbol for Young's Modulus of the material. E can be measured by dividing the elastic stress by the elastic strain.That is, this measurement must be made before the yield point of the material is reached.y is the yield point of the material. E is the symbol for Young's Modulus of the material. E can be measured by dividing the elastic stress by the elastic strain. The longest dimension of the cant. WT is 13'. Using ASTM A992 carbon steel, a WT9x35.5 is at full bending stress and deflection limits. (Fy = 50 ksi). The only information I've found about using stainless for structural design is that type 304 is usually used.This yield strength (Fy) is only equal to 39 or 42ksi.sing ASTM A992 carbon steel, a WT9x35.5 is at full bending stress and deflection limits. (Fy = 50 ksi). The only information I've found about using stainless for structural design is that type 304 is usually used. [0.5, 0.5]
    most common nutritional deficiencies for teenagers : Appendix B: Vitamin and Mineral Deficiencies in the U.S. Some American adults get too little vitamin D, vitamin E, magnesium, calcium, vitamin A and vitamin C (Table B1). More than 40 percent of adults have dietary intakes of vitamin A, C, D and E, calcium and magnesium below the average requirement for their age and gender. Inadequate intake of vitamins and minerals is most common among 14-to-18-year-old teenagers. Adolescent girls have lower nutrient intake than boys (Berner 2014; Fulgoni 2011). But nutrient deficiencies are rare among younger American children; the exceptions are dietary vitamin D and E, for which intake is low for all Americans, and calcium. Common Nutritional Deficiencies. 10 Most Common Nutritional Deficiencies.. Calcium. Calcium is one of the most abundant minerals in your body, yet most people still manage to have a calcium deficiency. Calcium is best know for adding strength to your bones and teeth. 1) Vitamin D–Vitamin D deficiency is common in infants born to mothers with low levels of Vitamin D. Severe deficiency of this nutrient in infancy and early childhood can lead to the development of Rickets, a disease that affects bone formation and causes bow-legs. [3.182860851287842, 7.834665775299072]
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMarginMSELoss",
        "document_regularizer_weight": 0.2,
        "query_regularizer_weight": 0.3
    }
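
For reference, the sketch below shows how a loss with the parameters listed above could be constructed. The import path, class names, and the idea of initializing a SPLADE model directly from the base checkpoint follow the Sentence Transformers v5 sparse-encoder API and are assumptions, not the exact training script:

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

# Start from the base MLM checkpoint; SparseEncoder adds the SpladePooling head on top
model = SparseEncoder("prajjwal1/bert-tiny")

# SpladeLoss wraps the margin-MSE distillation loss and adds FLOPS-style sparsity
# regularization on documents and queries, using the weights reported above
loss = SpladeLoss(
    model=model,
    loss=SparseMarginMSELoss(model),
    document_regularizer_weight=0.2,
    query_regularizer_weight=0.3,
)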
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.05
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
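
A sketch of how these non-default values map onto the trainer configuration. SparseEncoderTrainingArguments and its import path follow the Sentence Transformers v5 sparse-encoder trainer and are assumptions here; the output directory is a placeholder:

from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="splade-tiny",  # placeholder output directory
    num_train_epochs=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
    eval_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    push_to_hub=True,
)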

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step     Training Loss   dot_ndcg@10
1.0     37500    11.4095         0.7103
2.0     75000    10.5305         0.7139
3.0     112500   9.5368          0.7197
4.0     150000   8.717           0.7216
5.0     187500   8.3094          0.7217
  • The epoch 5 row denotes the saved checkpoint; its dot_ndcg@10 of 0.7217 matches the value reported in the Metrics section.

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 5.0.0
  • Transformers: 4.54.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.8.1
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMarginMSELoss

@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}