---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8622
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What is the purpose of geotechnical exploration at the PSEG Site?
sentences:
- >-
The purposes of the PSEG Site geotechnical exploration and testing were
to: - Obtain new data to meet current NRC and vendor design control
document Tier 1 site characteristics requirements as appropriate for an
ESPA - Confirm and demonstrate the applicability of the existing field
data from the previous site exploration work for the existing nuclear
plants
- >-
Geotechnical evaluations at the PSEG Site included assessing soil
stratigraphy and groundwater conditions to identify potential risks and
the suitability of the site for construction, focusing on the mechanical
properties of subsurface materials.
- >-
Table 3.8-3 Illinois Inventory of Archaeological Sites Entries within
6-miles of DNPS (Sheet 2 of 28) lists various archaeological sites and
their statuses relevant to the regulatory considerations for the plant.
- source_sentence: >-
The analysis of the identified nuclides can greatly aid in determining the
safety measures necessary for nuclear facilities.
sentences:
- IDENTIFIED NUCLIDES
- 'Peak Analysis Performed on: 5/29/2019 6:14:38 AM'
- >-
10 CFR Part 50, Appendix H, “Reactor Vessel Material Surveillance
Program Requirements,” requires that peak neutron fluence at the end of
the design life of the vessel will not exceed 1.0 x 10¹⁷ n/cm² (E > 1.0
MeV), or that reactor vessel beltline materials be monitored by a
surveillance program.
- source_sentence: >-
The NRC assessment includes evaluations to determine the impact of
specific events on safety measures.
sentences:
- >-
The staff noted that the licensee performed a root cause evaluation with
an extent of condition and extent of cause evaluation following the May
25 scram.
- >-
In assessing operational events, it is crucial to differentiate between
various types of occurrences to ensure comprehensive safety evaluations
encompass all relevant aspects, including human factors and procedural
adherence.
- >-
The reactor trip breaker indicating lights provide crucial information
on the status of the reactor trip system during an Anticipated Transient
Without Scram (ATWS).
- source_sentence: >-
Each reactor building isolation valve must remain effective during various
operational modes.
sentences:
- >-
The RHRSW System functions to remove heat from the RHR System and
Emergency Equipment Cooling Water (EECW) System components by pumping
water from Wheeler Reservoir through the Residual Heat Removal (RHR)
heat exchangers and Emergency Equipment Cooling Water (EECW) System
components and discharges back to Wheeler Reservoir.
- Each reactor building isolation valve shall be OPERABLE.
- Separate Condition entry is allowed for each penetration flow path.
- source_sentence: What is the purpose of the Rapid Borate Stop Valve in Reactor Control?
sentences:
- >-
CLOSE the Air Supply Isolation Valve, 12CV160 A/S, AIR SUPPLY FOR
12CV160.
- >-
The NRC staff is reviewing Westinghouse’s license renewal application
and preparing an environmental impact statement (EIS) in accordance with
the National Environmental Policy Act of 1969.
- >-
Locates and discusses opening 1CV175, Rapid Borate Stop Valve by
disengaging clutch and rotating handwheel (counterclockwise).
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: validation
type: validation
metrics:
- type: cosine_accuracy
value: 0.9397031664848328
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9387755393981934
name: Cosine Accuracy
---
SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
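For reference, the same stack can be assembled module by module. This is a minimal sketch based on the architecture listed above (Transformer backbone, CLS pooling, L2 normalization); in practice, loading the published checkpoint by name already restores this configuration.

```python
from sentence_transformers import SentenceTransformer, models

# Transformer backbone (BERT), truncating inputs at 512 tokens
word_embedding = models.Transformer(
    "BAAI/bge-base-en-v1.5",
    max_seq_length=512,
    do_lower_case=True,
)
# CLS-token pooling yields one 768-dimensional vector per input
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode="cls",
)
# L2 normalization so that dot products equal cosine similarities
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])
```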
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'What is the purpose of the Rapid Borate Stop Valve in Reactor Control?',
'Locates and discusses opening 1CV175, Rapid Borate Stop Valve by disengaging clutch and rotating handwheel (counterclockwise).',
'CLOSE the Air Supply Isolation Valve, 12CV160 A/S, AIR SUPPLY FOR 12CV160.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
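The same embeddings can also drive a small semantic-search loop. The snippet below is a minimal sketch: the model ID is the placeholder used above, and the corpus sentences are illustrative examples taken from the widget section.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# Illustrative corpus drawn from the widget examples above
corpus = [
    "Each reactor building isolation valve shall be OPERABLE.",
    "Separate Condition entry is allowed for each penetration flow path.",
    "The NRC staff is reviewing Westinghouse’s license renewal application.",
]
query = "Which valves must remain operable?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine scores between the query and every corpus sentence: shape [1, 3]
scores = model.similarity(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```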
Evaluation
Metrics
Triplet
- Dataset: validation
- Evaluated with TripletEvaluator
Metric | Value |
---|---|
cosine_accuracy | 0.9397 |
Triplet
- Dataset: validation
- Evaluated with TripletEvaluator
Metric | Value |
---|---|
cosine_accuracy | 0.9388 |
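Both accuracies were computed with the TripletEvaluator, which measures how often the anchor embedding is closer to the positive than to the negative. A minimal sketch of an equivalent evaluation, using illustrative triplets rather than the actual validation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# Illustrative anchor / positive / negative texts
evaluator = TripletEvaluator(
    anchors=["Each reactor building isolation valve must remain effective."],
    positives=["Each reactor building isolation valve shall be OPERABLE."],
    negatives=["Separate Condition entry is allowed for each penetration flow path."],
    name="validation",
)
results = evaluator(model)
print(results)  # e.g. {'validation_cosine_accuracy': ...}
```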
Training Details
Training Dataset
Unnamed Dataset
- Size: 8,622 training samples
- Columns: sentence_0, sentence_1, and sentence_2
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | sentence_2 |
|---|---|---|---|
| type | string | string | string |
| details | min: 5 tokens, mean: 14.64 tokens, max: 41 tokens | min: 5 tokens, mean: 43.24 tokens, max: 512 tokens | min: 3 tokens, mean: 31.29 tokens, max: 512 tokens |
- Samples:

| sentence_0 | sentence_1 | sentence_2 |
|---|---|---|
| What is the concentration of H-3 in µCi/ml? | H-3 has a concentration of 8.5E-10 µCi/ml. | The isotope Rb-89 has a release rate of 4.7E-05 Ci/yr. |
| gamma calibration procedures | Gamma Calibration: GM detectors positioned perpendicular to source for M-44-9 in which the front of probe faces source. | Effective calibration of GM detectors is crucial for accurate measurement. Procedures often involve using a consistent radiation source and monitoring the response of various detector models across multiple energy levels. |
| What is the function of the TAP-A program in thermal analysis? | The TAP-A program is applicable to both “transient and steady-state heat transfer in multidimensional systems having arbitrary geometric configurations, boundary conditions, initial conditions, and physical properties. | The wall panel model for the crane wall is 48 ft long with 8 axial stations each 6 ft in length. |
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
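A minimal sketch of how this loss is typically instantiated in Sentence Transformers with the parameters listed above (the actual training script is not part of this card):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# scale=20.0 and cosine similarity, as listed above; with triplet rows,
# the other examples in the batch also act as additional in-batch negatives.
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```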
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- num_train_epochs: 5
- fp16: True
- multi_dataset_batch_sampler: round_robin
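As a rough sketch, these non-default values map onto SentenceTransformerTrainingArguments as shown below. The output directory, the tiny stand-in dataset, and the reuse of the training split for evaluation are assumptions made for illustration only.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Illustrative stand-in for the 8,622-row (sentence_0, sentence_1, sentence_2) dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the concentration of H-3 in µCi/ml?"],
    "sentence_1": ["H-3 has a concentration of 8.5E-10 µCi/ml."],
    "sentence_2": ["The isotope Rb-89 has a release rate of 4.7E-05 Ci/yr."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",                      # assumption; not stated in this card
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    fp16=True,                                # requires a CUDA GPU
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,               # stand-in; the card used a held-out validation split
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```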
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss | validation_cosine_accuracy |
---|---|---|---|
0.5556 | 200 | - | 0.9272 |
1.0 | 360 | - | 0.9318 |
1.1111 | 400 | - | 0.9309 |
1.3889 | 500 | 0.5286 | - |
1.6667 | 600 | - | 0.9355 |
2.0 | 720 | - | 0.9378 |
2.2222 | 800 | - | 0.9374 |
2.7778 | 1000 | 0.2751 | 0.9397 |
3.0 | 1080 | - | 0.9397 |
0.7407 | 200 | - | 0.9374 |
1.0 | 270 | - | 0.9369 |
1.4815 | 400 | - | 0.9374 |
1.8519 | 500 | 0.2128 | - |
2.0 | 540 | - | 0.9383 |
2.2222 | 600 | - | 0.9388 |
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.2.2
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
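To approximate this environment, the versions above can be pinned directly; the exact install command below is an assumption, not taken from this card.

pip install sentence-transformers==4.1.0 transformers==4.51.3 torch==2.2.2 accelerate==1.6.0 datasets==3.5.0 tokenizers==0.21.1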
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}