Lampistero

This is a sentence-transformers model finetuned from jinaai/jina-embeddings-v3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jinaai/jina-embeddings-v3
  • Maximum Sequence Length: 8194 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: es
  • License: apache-2.0
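
The properties listed above can be verified after loading the model; the snippet below is a small sketch using standard Sentence Transformers attributes.

# Quick sanity check of the properties listed above (standard Sentence Transformers API).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("csanz91/lampistero_rag_embeddings_2")
print(model.max_seq_length)                      # 8194
print(model.get_sentence_embedding_dimension())  # 1024
print(model.similarity_fn_name)                  # cosine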

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (transformer): Transformer(
    (auto_model): XLMRobertaLoRA(
      (roberta): XLMRobertaModel(
        (embeddings): XLMRobertaEmbeddings(
          (word_embeddings): ParametrizedEmbedding(
            250002, 1024, padding_idx=1
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (token_type_embeddings): ParametrizedEmbedding(
            1, 1024
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
        )
        (emb_drop): Dropout(p=0.1, inplace=False)
        (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        (encoder): XLMRobertaEncoder(
          (layers): ModuleList(
            (0-23): 24 x Block(
              (mixer): MHA(
                (rotary_emb): RotaryEmbedding()
                (Wqkv): ParametrizedLinearResidual(
                  in_features=1024, out_features=3072, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (inner_attn): FlashSelfAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (inner_cross_attn): FlashCrossAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (out_proj): ParametrizedLinear(
                  in_features=1024, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout1): Dropout(p=0.1, inplace=False)
              (drop_path1): StochasticDepth(p=0.0, mode=row)
              (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
              (mlp): Mlp(
                (fc1): ParametrizedLinear(
                  in_features=1024, out_features=4096, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (fc2): ParametrizedLinear(
                  in_features=4096, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout2): Dropout(p=0.1, inplace=False)
              (drop_path2): StochasticDepth(p=0.0, mode=row)
              (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            )
          )
        )
        (pooler): XLMRobertaPooler(
          (dense): ParametrizedLinear(
            in_features=1024, out_features=1024, bias=True
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (activation): Tanh()
        )
      )
    )
  )
  (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (normalizer): Normalize()
)
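
For intuition, the sketch below mimics what the final Pooling and Normalize modules do: mean-pool the transformer's token embeddings over the attention mask, then L2-normalize, giving one 1024-dimensional vector per input. It uses dummy tensors and is not the library's internal code.

import torch
import torch.nn.functional as F

# Dummy transformer output: (batch, seq_len, hidden_size)
token_embeddings = torch.randn(2, 7, 1024)
attention_mask = torch.ones(2, 7, dtype=torch.long)  # 1 = real token, 0 = padding

mask = attention_mask.unsqueeze(-1).float()           # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(dim=1)         # sum embeddings of non-padding tokens
counts = mask.sum(dim=1).clamp(min=1e-9)              # number of non-padding tokens per text
sentence_embeddings = summed / counts                 # mean pooling -> (batch, 1024)
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)  # unit norm, so dot product equals cosine
print(sentence_embeddings.shape)  # torch.Size([2, 1024])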

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("csanz91/lampistero_rag_embeddings_2")
# Run inference
sentences = [
    "¿En qué año se demarcó y reconoció la mina 'El Pilar'?",
    "La mina 'El Pilar' se demarcó y reconoció en 1857.",
    'Según la quinta demanda del SOMM, todas compañías mineras debían entregar a todos sus obreros un libramiento de liquidación mensual',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
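
Because the model was trained with a Matryoshka objective (see Training Details below), its 1024-dimensional embeddings can also be truncated to smaller sizes with only a modest quality drop. The following semantic-search sketch runs over a tiny in-memory corpus; the corpus texts, the query, and the truncate_dim value are illustrative, not part of the original card.

from sentence_transformers import SentenceTransformer

# truncate_dim is optional; 256 is one of the dimensions the model was trained to support
model = SentenceTransformer("csanz91/lampistero_rag_embeddings_2", truncate_dim=256)

corpus = [
    "La mina 'El Pilar' se demarcó y reconoció en 1857.",
    "Los juegos infantiles preparados para el verano de 2021 en Escucha están en formato revista.",
]
query = "¿En qué año se reconoció la mina 'El Pilar'?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

scores = model.similarity(query_embedding, corpus_embeddings)  # cosine similarities, shape [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))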

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.7701
cosine_accuracy@3 0.8926
cosine_accuracy@5 0.9155
cosine_accuracy@10 0.933
cosine_precision@1 0.7701
cosine_precision@3 0.2975
cosine_precision@5 0.1831
cosine_precision@10 0.0933
cosine_recall@1 0.7701
cosine_recall@3 0.8926
cosine_recall@5 0.9155
cosine_recall@10 0.933
cosine_ndcg@10 0.8579
cosine_mrr@10 0.8331
cosine_map@100 0.8343

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7695
cosine_accuracy@3 0.889
cosine_accuracy@5 0.9125
cosine_accuracy@10 0.933
cosine_precision@1 0.7695
cosine_precision@3 0.2963
cosine_precision@5 0.1825
cosine_precision@10 0.0933
cosine_recall@1 0.7695
cosine_recall@3 0.889
cosine_recall@5 0.9125
cosine_recall@10 0.933
cosine_ndcg@10 0.8571
cosine_mrr@10 0.8321
cosine_map@100 0.8333

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7683
cosine_accuracy@3 0.8865
cosine_accuracy@5 0.9113
cosine_accuracy@10 0.9306
cosine_precision@1 0.7683
cosine_precision@3 0.2955
cosine_precision@5 0.1823
cosine_precision@10 0.0931
cosine_recall@1 0.7683
cosine_recall@3 0.8865
cosine_recall@5 0.9113
cosine_recall@10 0.9306
cosine_ndcg@10 0.8555
cosine_mrr@10 0.8307
cosine_map@100 0.8321

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.764
cosine_accuracy@3 0.8902
cosine_accuracy@5 0.9083
cosine_accuracy@10 0.93
cosine_precision@1 0.764
cosine_precision@3 0.2967
cosine_precision@5 0.1817
cosine_precision@10 0.093
cosine_recall@1 0.764
cosine_recall@3 0.8902
cosine_recall@5 0.9083
cosine_recall@10 0.93
cosine_ndcg@10 0.8535
cosine_mrr@10 0.8283
cosine_map@100 0.8296

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7447
cosine_accuracy@3 0.8769
cosine_accuracy@5 0.9028
cosine_accuracy@10 0.9215
cosine_precision@1 0.7447
cosine_precision@3 0.2923
cosine_precision@5 0.1806
cosine_precision@10 0.0922
cosine_recall@1 0.7447
cosine_recall@3 0.8769
cosine_recall@5 0.9028
cosine_recall@10 0.9215
cosine_ndcg@10 0.8403
cosine_mrr@10 0.8134
cosine_map@100 0.8149

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.7103
cosine_accuracy@3 0.8491
cosine_accuracy@5 0.8781
cosine_accuracy@10 0.8998
cosine_precision@1 0.7103
cosine_precision@3 0.283
cosine_precision@5 0.1756
cosine_precision@10 0.09
cosine_recall@1 0.7103
cosine_recall@3 0.8491
cosine_recall@5 0.8781
cosine_recall@10 0.8998
cosine_ndcg@10 0.8119
cosine_mrr@10 0.7829
cosine_map@100 0.7851
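
The tables above report retrieval quality at each of the six Matryoshka embedding sizes. Comparable metrics can be computed on your own query/answer pairs with the InformationRetrievalEvaluator from Sentence Transformers; the sketch below uses illustrative ids and texts rather than the actual evaluation split.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("csanz91/lampistero_rag_embeddings_2")

queries = {"q1": "¿En qué año se demarcó y reconoció la mina 'El Pilar'?"}
corpus = {
    "d1": "La mina 'El Pilar' se demarcó y reconoció en 1857.",
    "d2": "Los juegos infantiles de 2021 en Escucha están en formato revista.",
}
relevant_docs = {"q1": {"d1"}}  # corpus ids that answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_1024")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100, keyed by the evaluator name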

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 14,907 training samples
  • Columns: query and answer
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 9 tokens, mean: 26.09 tokens, max: 66 tokens
    • answer: string; min: 4 tokens, mean: 34.02 tokens, max: 405 tokens
  • Samples:
    • query: ¿Qué tipos de palas se utilizan para cargar el carbón y el mineral?
      answer: Se utiliza una pala convencional y una pala hidráulica, esta última descarga sobre un páncer, puede hacerlo lateralmente y se desplaza sobre ruedas u oruga.
    • query: Tras el cierre de la tejería de Florencio Salvador, ¿de dónde procedieron finalmente los ladrillos para las doscientas diez viviendas construidas en Utrillas?
      answer: Los ladrillos y material para las doscientas diez viviendas construidas en Utrillas procedieron finalmente de Letux, Zaragoza.
    • query: ¿Cuál es el formato de los juegos infantiles que se están preparando para el verano en Escucha en 2021?
      answer: Los juegos infantiles que se están preparando para el verano en Escucha en 2021 están en formato revista.
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows the list):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
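
A construction sketch for this loss configuration: MatryoshkaLoss wraps MultipleNegativesRankingLoss so each (query, answer) batch is scored at every listed embedding size with equal weight. This is illustrative and not the exact training script.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)  # the base model ships custom modeling code

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
)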
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 32
  • learning_rate: 2e-05
  • num_train_epochs: 8
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
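
As a rough guide, the non-default values above map onto SentenceTransformerTrainingArguments as sketched below; output_dir and save_strategy are assumptions added so the snippet constructs without errors, not values taken from the original run.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="lampistero_rag_embeddings_2",  # assumed, not from the original run
    eval_strategy="epoch",
    save_strategy="epoch",                     # assumed so load_best_model_at_end can restore the best checkpoint
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,
    num_train_epochs=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=True,                                 # requires an Ampere-or-newer GPU
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)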

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0    | 8    | -             | 0.7841 | 0.7835 | 0.7836 | 0.7791 | 0.7665 | 0.7226 |
| 1.2747 | 10   | 58.1187       | -      | -      | -      | -      | -      | -      |
| 2.0    | 16   | -             | 0.8348 | 0.8366 | 0.8345 | 0.8301 | 0.8184 | 0.7861 |
| 2.5494 | 20   | 24.4181       | -      | -      | -      | -      | -      | -      |
| 3.0    | 24   | -             | 0.8521 | 0.8504 | 0.8503 | 0.8457 | 0.8319 | 0.8007 |
| 3.8240 | 30   | 16.1488       | -      | -      | -      | -      | -      | -      |
| 4.0    | 32   | -             | 0.8561 | 0.8548 | 0.8555 | 0.8509 | 0.8387 | 0.8073 |
| 5.0    | 40   | 13.4897       | 0.8585 | 0.8556 | 0.8545 | 0.8528 | 0.8397 | 0.8111 |
| 6.0    | 48   | -             | 0.8578 | 0.8563 | 0.8550 | 0.8535 | 0.8410 | 0.8110 |
| 6.2747 | 50   | 13.7469       | -      | -      | -      | -      | -      | -      |
| 7.0    | 56   | -             | 0.8579 | 0.8571 | 0.8555 | 0.8535 | 0.8403 | 0.8119 |

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1
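
A pinned install matching these versions might look like the line below; the CUDA 12.6 build of PyTorch (2.7.0+cu126) is obtained from the PyTorch package index rather than the default PyPI wheel.

pip install sentence-transformers==4.1.0 transformers==4.51.3 torch==2.7.0 accelerate==1.7.0 datasets==3.6.0 tokenizers==0.21.1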

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}