SentenceTransformer based on intfloat/multilingual-e5-large-instruct

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large-instruct on the d4-embeddings dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large-instruct
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: d4-embeddings

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
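
For reference, the three modules above (Transformer, mean Pooling, Normalize) can be reproduced without the sentence-transformers wrapper. The snippet below is a minimal sketch using the transformers library directly, assuming the XLM-RoBERTa weights load with AutoModel from the repository root (as is typical for Sentence Transformers checkpoints); the helper name mean_pool is illustrative, not part of this repository.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lauther/d4-embeddings-v2.0")
model = AutoModel.from_pretrained("Lauther/d4-embeddings-v2.0")

def mean_pool(last_hidden_state, attention_mask):
    # Mean pooling over token embeddings, ignoring padding (matches pooling_mode_mean_tokens).
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

texts = ["PTE SUZANO", "What is a flow computer?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)
# L2-normalize, matching the Normalize() module, so dot products equal cosine similarities.
embeddings = F.normalize(mean_pool(output.last_hidden_state, batch["attention_mask"]), p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 1024])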

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/d4-embeddings-v2.0")
# Run inference
sentences = [
    'PTE SUZANO',
    'What is a Calibration Record?\nA Calibration Record documents the calibration process of a specific equipment tag, ensuring that its measurements remain accurate and reliable. Calibration is a critical process in maintaining measurement precision and compliance with standards.\n\nKey Aspects of a Calibration Record:\n- Calibration Date: The exact date when the calibration was performed, crucial for tracking maintenance schedules.\n- Certification Number: A unique identifier for the calibration certificate, providing traceability and verification of compliance.\n- Range Values: The minimum and maximum measurement values covered during the calibration process.\n- Calibration Status: Indicates whether the calibration was approved or saved for further review.\n- Associated Units: Specifies the measurement units used in calibration (e.g., °C, psi).\n- Associated Equipment Tag ID: Links the calibration record to a specific equipment tag, ensuring traceability of measurement instruments.\nCalibration records play a fundamental role in quality assurance, helping maintain measurement integrity and regulatory compliance.',
    'What is a flow computer?\nA flow computer is a device used in measurement engineering. It collects analog and digital data from flow meters and other sensors.\n\nKey features of a flow computer:\n- It has a unique name, firmware version, and manufacturer information.\n- It is designed to record and process data such as temperature, pressure, and fluid volume (for gases or oils).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
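# A possible follow-up (illustrative sketch, not from the original example): use the
# similarity matrix to rank the two documentation passages against the short query
# held in sentences[0].
query_scores = similarities[0, 1:]        # similarity of sentences[1:] to 'PTE SUZANO'
best = int(query_scores.argmax()) + 1     # index of the best-matching passage
print(best, float(query_scores.max()))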

Training Details

Training Dataset

d4-embeddings

  • Dataset: d4-embeddings at 09fb8a5
  • Size: 11,165 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min: 3 tokens, mean: 8.23 tokens, max: 19 tokens
    • sentence2 (string): min: 27 tokens, mean: 187.19 tokens, max: 406 tokens
    • label (int): 0: ~66.20%, 1: ~33.80%
  • Samples:
    • sentence1: Ramal ESVOL - TEVOL (GASVOL 14")
      sentence2: What is Equipment?
      An Equipment represents a physical device that may be used within a measurement system. Equipment can be active or inactive and is classified by type, such as transmitters, thermometers, or other measurement-related devices.

      Key Aspects of Equipment:
      - Serial Number: A unique identifier assigned to each equipment unit for tracking and reference.
      - Current State: Indicates whether the equipment is currently in use (ACT) or inactive (INA).
      - Associated Equipment Type: Defines the category of the equipment (e.g., transmitter, thermometer), allowing classification and management.
      Equipment plays a critical role in measurement systems, ensuring accuracy and reliability in data collection and processing.
      label: 0
    • sentence1: Mol (%) CO
      sentence2: What is an Equipment Tag?
      An Equipment Tag is a unique label string identifier assigned to equipment that is actively installed and in use within a measurement system. It differentiates between equipment in general (which may be in storage or inactive) and equipment that is currently operational in a system.

      Key Aspects of Equipment Tags:
      - Equipment-Tag: A distinct label or identifier that uniquely marks the equipment in operation.
      - Equipment ID: Links the tag to the corresponding equipment unit.
      - Belonging Measurement System: Specifies which measurement system the tagged equipment is part of.
      - Equipment Type Name: Classifies the equipment (e.g., transmitter, thermometer), aiding in organization and system integration.
      The Equipment Tag is essential for tracking and managing operational equipment within a measurement system, ensuring proper identification, monitoring, and maintenance.
      label: 0
    • sentence1: FQI-4715-1411
      sentence2: What is a flow computer?
      A flow computer is a device used in measurement engineering. It collects analog and digital data from flow meters and other sensors.

      Key features of a flow computer:
      - It has a unique name, firmware version, and manufacturer information.
      - It is designed to record and process data such as temperature, pressure, and fluid volume (for gases or oils).
      label: 0
  • Loss: ContrastiveLoss with these parameters:
    {
        "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
        "margin": 0.5,
        "size_average": true
    }
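
The ContrastiveLoss objective above, with cosine distance and a margin of 0.5, pulls pairs labeled 1 together and pushes pairs labeled 0 apart until their cosine distance exceeds the margin; size_average: true averages the per-pair losses. Below is a minimal illustrative sketch of that computation mirroring the parameters shown (e1 and e2 stand for batches of L2-normalized embeddings; the function name is hypothetical):

import torch
import torch.nn.functional as F

def contrastive_loss(e1, e2, labels, margin=0.5):
    # Cosine distance per pair (the model already L2-normalizes its embeddings).
    distances = 1.0 - F.cosine_similarity(e1, e2)
    # label 1: penalize any remaining distance; label 0: penalize only pairs closer than the margin.
    positives = labels.float() * distances.pow(2)
    negatives = (1.0 - labels.float()) * F.relu(margin - distances).pow(2)
    return 0.5 * (positives + negatives).mean()  # size_average: true -> mean over the batch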
    

Evaluation Dataset

d4-embeddings

  • Dataset: d4-embeddings at 09fb8a5
  • Size: 2,392 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min: 3 tokens, mean: 8.22 tokens, max: 19 tokens
    • sentence2 (string): min: 27 tokens, mean: 183.06 tokens, max: 406 tokens
    • label (int): 0: ~66.30%, 1: ~33.70%
  • Samples:
    • sentence1: PTE UTE JUIZ DE FORA (IGREJINHA) B
      sentence2: What is uncertainty?
      Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.

      Types of uncertainty:
      There are two main types of uncertainty:
      1. Uncertainty of magnitudes (variables):
      - Refers to the uncertainty of specific variables, such as temperature or pressure.
      - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.
      - This uncertainty serves as a starting point for further calculations related to the equipment.

      2. Uncertainty of the measurement system:
      - Refers to the uncertainty calculated for the overall flow measurement.
      - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.

      Key points:
      - The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...
      label: 1
    • sentence1: measure type
      sentence2: What is a Calibration Record?
      A Calibration Record documents the calibration process of a specific equipment tag, ensuring that its measurements remain accurate and reliable. Calibration is a critical process in maintaining measurement precision and compliance with standards.

      Key Aspects of a Calibration Record:
      - Calibration Date: The exact date when the calibration was performed, crucial for tracking maintenance schedules.
      - Certification Number: A unique identifier for the calibration certificate, providing traceability and verification of compliance.
      - Range Values: The minimum and maximum measurement values covered during the calibration process.
      - Calibration Status: Indicates whether the calibration was approved or saved for further review.
      - Associated Units: Specifies the measurement units used in calibration (e.g., °C, psi).
      - Associated Equipment Tag ID: Links the calibration record to a specific equipment tag, ensuring traceability of measurement instruments.
      Calibration r...
      label: 0
    • sentence1: daily flow rate
      sentence2: What is a Measured Magnitude Value?
      A Measured Magnitude Value represents a DAILY recorded physical measurement of a variable within a monitored fluid. These values are essential for tracking system performance, analyzing trends, and ensuring accurate monitoring of fluid properties.

      Key Aspects of a Measured Magnitude Value:
      - Measurement Date: The timestamp indicating when the measurement was recorded.
      - Measured Value: The daily numeric result of the recorded physical magnitude.
      - Measurement System Association: Links the measured value to a specific measurement system responsible for capturing the data.
      - Variable Association: Identifies the specific variable (e.g., temperature, pressure, flow rate) corresponding to the recorded value.
      Measured magnitude values are crucial for real-time monitoring, historical analysis, and calibration processes within measurement systems.

      Database advices:
      This values also are in historics of a flow computer report. Although, to go directl...
      label: 1
  • Loss: ContrastiveLoss with these parameters:
    {
        "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
        "margin": 0.5,
        "size_average": true
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 12
  • gradient_accumulation_steps: 8
  • weight_decay: 0.01
  • max_grad_norm: 0.5
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
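
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments (the output_dir value is a placeholder, not taken from this run; everything else mirrors the list above):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="d4-embeddings-v2.0",   # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=8,     # effective batch of 12 * 8 = 96 samples per device
    weight_decay=0.01,
    max_grad_norm=0.5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)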

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 12
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 0.5
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss
0.4296 50 0.1345 -
0.8593 100 0.0512 -
1.2836 150 0.041 0.0051
1.7132 200 0.0344 -
2.1375 250 0.0324 -
2.5671 300 0.0284 0.0038
2.9968 350 0.0296 -
3.4211 400 0.0261 -
3.8507 450 0.0268 0.0035
4.2750 500 0.0244 -
4.7046 550 0.0249 -
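
For orientation: with per_device_train_batch_size 12 and gradient_accumulation_steps 8 (assuming a single device), each optimizer step covers roughly 96 samples, so the 11,165 training samples give about 116 steps per epoch; step 50 therefore falls at roughly epoch 0.43 and step 550 near epoch 4.7, consistent with the rows above.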

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title={Dimensionality Reduction by Learning an Invariant Mapping},
    year={2006},
    volume={2},
    number={},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}