SentenceTransformer based on thenlper/gte-small
This is a sentence-transformers model finetuned from thenlper/gte-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: thenlper/gte-small
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
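The same module stack can be assembled by hand with the `sentence_transformers.models` API. The following is a minimal sketch for illustration only (it is not the actual training code); the base checkpoint and pooling settings mirror the architecture printout above:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer module: BERT encoder from the base checkpoint, truncating inputs at 512 tokens.
word_embedding_model = models.Transformer("thenlper/gte-small", max_seq_length=512)

# Mean pooling over token embeddings (matches pooling_mode_mean_tokens=True above).
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384 for gte-small
    pooling_mode="mean",
)

# L2-normalize the sentence embedding so dot product equals cosine similarity.
normalize_model = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize_model])
print(model)  # prints a module stack equivalent to the one shown above
```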
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sucharush/gte_MNR")
# Run inference
sentences = [
'Does systemic administration of urocortin after intracerebral hemorrhage reduce neurological deficits and neuroinflammation in rats?',
"Intracerebral hemorrhage (ICH) remains a serious clinical problem lacking effective treatment. Urocortin (UCN), a novel anti-inflammatory neuropeptide, protects injured cardiomyocytes and dopaminergic neurons. Our preliminary studies indicate UCN alleviates ICH-induced brain injury when administered intracerebroventricularly (ICV). The present study examines the therapeutic effect of UCN on ICH-induced neurological deficits and neuroinflammation when administered by the more convenient intraperitoneal (i.p.) route. ICH was induced in male Sprague-Dawley rats by intrastriatal infusion of bacterial collagenase VII-S or autologous blood. UCN (2.5 or 25 μg/kg) was administered i.p. at 60 minutes post-ICH. Penetration of i.p. administered fluorescently labeled UCN into the striatum was examined by fluorescence microscopy. Neurological deficits were evaluated by modified neurological severity score (mNSS). Brain edema was assessed using the dry/wet method. Blood-brain barrier (BBB) disruption was assessed using the Evans blue assay. Hemorrhagic volume and lesion volume were assessed by Drabkin's method and morphometric assay, respectively. Pro-inflammatory cytokine (TNF-α, IL-1β, and IL-6) expression was evaluated by enzyme-linked immunosorbent assay (ELISA). Microglial activation and neuronal loss were evaluated by immunohistochemistry. Administration of UCN reduced neurological deficits from 1 to 7 days post-ICH. Surprisingly, although a higher dose (25 μg/kg, i.p.) also reduced the functional deficits associated with ICH, it is significantly less effective than the lower dose (2.5 μg/kg, i.p.). Beneficial results with the low dose of UCN included a reduction in neurological deficits from 1 to 7 days post-ICH, as well as a reduction in brain edema, BBB disruption, lesion volume, microglial activation and neuronal loss 3 days post-ICH, and suppression of TNF-α, IL-1β, and IL-6 production 1, 3 and 7 days post-ICH.",
'In type theory, the successor function $S$ is used to represent the next number in the sequence. When you apply the successor function $S$ three times to the number 0, you get:\n\n1. $S(0)$, which represents 1.\n2. $S(S(0))$, which represents 2.\n3. $S(S(S(0)))$, which represents 3.\n\nSo, the result of applying the successor function $S$ three times to the number 0 in type theory is 3.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
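Because the model L2-normalizes its embeddings, cosine similarity scores can be used directly to rank documents against a query. Below is a minimal retrieval sketch; the corpus and query strings are made-up illustrations, not data from this model's training set:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sucharush/gte_MNR")

# Illustrative corpus and query (hypothetical examples).
corpus = [
    "Urocortin is an anti-inflammatory neuropeptide studied in brain injury models.",
    "The successor function in type theory builds natural numbers from zero.",
    "Transcobalamin II receptor variants have been linked to neural tube defects.",
]
query = "Which neuropeptide has been tested against intracerebral hemorrhage?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Rank corpus passages by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```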
Evaluation
Metrics
Logging
- Dataset: ir-eval
- Evaluated with main.LoggingEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9291 |
cosine_accuracy@3 | 0.9819 |
cosine_accuracy@5 | 0.9934 |
cosine_accuracy@10 | 0.9984 |
cosine_precision@1 | 0.9291 |
cosine_precision@3 | 0.3273 |
cosine_precision@5 | 0.1987 |
cosine_recall@1 | 0.9291 |
cosine_recall@3 | 0.9819 |
cosine_recall@5 | 0.9934 |
cosine_ndcg@10 | 0.967 |
cosine_mrr@10 | 0.9565 |
cosine_map@100 | 0.9566 |
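The ir-eval numbers above were produced with main.LoggingEvaluator, which appears to be a project-specific wrapper rather than part of the library. A comparable set of metrics (cosine accuracy/precision/recall@k, NDCG@10, MRR@10, MAP@100) can be computed with the built-in InformationRetrievalEvaluator; the sketch below uses toy queries and documents in place of the real ir-eval split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sucharush/gte_MNR")

# Toy data standing in for the actual ir-eval split (IDs and texts are illustrative).
queries = {"q1": "Does urocortin reduce neuroinflammation after intracerebral hemorrhage?"}
corpus = {
    "d1": "Urocortin reduced neurological deficits and neuroinflammation in a rat ICH model.",
    "d2": "The successor function S applied three times to 0 yields 3.",
}
relevant_docs = {"q1": {"d1"}}

ir_evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="ir-eval",
)
results = ir_evaluator(model)
print(results)  # dict of metrics such as ir-eval_cosine_ndcg@10
```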
Training Details
Training Dataset
Unnamed Dataset
- Size: 98,112 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:
  - sentence_0: string; min: 6 tokens, mean: 44.14 tokens, max: 512 tokens
  - sentence_1: string; min: 12 tokens, mean: 321.5 tokens, max: 512 tokens
- Samples:
  - Sample 1:
    - sentence_0: Are transcobalamin II receptor polymorphisms associated with increased risk for neural tube defects?
    - sentence_1: Women who have low cobalamin (vitamin B(12)) levels are at increased risk for having children with neural tube defects (NTDs). The transcobalamin II receptor (TCblR) mediates uptake of cobalamin into cells. Inherited variants in the TCblR gene as NTD risk factors were evaluated. Case-control and family-based tests of association were used to screen common variation in TCblR as genetic risk factors for NTDs in a large Irish group. A confirmatory group of NTD triads was used to test positive findings. 2 tightly linked variants associated with NTDs in a recessive model were found: TCblR rs2336573 (G220R; p(corr)=0.0080, corrected for multiple hypothesis testing) and TCblR rs9426 (p(corr)=0.0279). These variants were also associated with NTDs in a family-based test before multiple test correction (log-linear analysis of a recessive model: rs2336573 (G220R; RR=6.59, p=0.0037) and rs9426 (RR=6.71, p=0.0035)). A copy number variant distal to TCblR and two previously unreported exonic insertio...
  - Sample 2:
    - sentence_0: A company produces three products: Product A, B, and C. The monthly sales figures and marketing expenses (in thousands of dollars) for each product for the last six months are given below: Product
  - Sample 3:
    - sentence_0: Consider a basketball player who has a free-throw shooting percentage of 80%. The player attempts 10 free throws in a game. If the player makes a free throw, there is an 80% chance that they will make their next free throw attempt. If they miss a free throw, there's a 60% chance that they will make their next free throw attempt. What is the probability that the player makes exactly 7 out of their 10 free throw attempts?
    - sentence_1: To solve this problem, we can use the concept of conditional probability and the binomial theorem. Let's denote the probability of making a free throw after a successful attempt as P(S) = 0.8 and the probability of making a free throw after a missed attempt as P(M) = 0.6. We need to find the probability of making exactly 7 out of 10 free throw attempts. There are multiple ways this can happen, and we need to consider all possible sequences of 7 successes (S) and 3 misses (M). We can represent these sequences as a string of S and M, for example, SSSSSSSMMM. There are C(10, 7) = 10! / (7! * 3!) = 120 ways to arrange 7 successes and 3 misses in a sequence of 10 attempts. For each of these sequences, we can calculate the probability of that specific sequence occurring and then sum up the probabilities of all sequences. Let's calculate the probability of a specific sequence. For example, consider the sequence SSSSSSSMMM. The probability of this sequence occurring is: P(SSSSSSSMMM) = P(S...
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
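MultipleNegativesRankingLoss treats each (sentence_0, sentence_1) pair as a positive and uses every other sentence_1 in the batch as a negative. The following is a rough, self-contained sketch of that objective with the parameters listed above (scale 20.0, cosine similarity); it is illustrative and not the library implementation:

```python
import torch
import torch.nn.functional as F

def mnr_loss_sketch(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Illustrative in-batch-negatives objective; not the library implementation.

    anchors / positives: (batch_size, dim) embeddings of sentence_0 / sentence_1.
    """
    # Cosine similarity between every anchor and every in-batch candidate.
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    scores = anchors @ positives.T * scale  # (batch_size, batch_size)

    # For anchor i the matching positive is candidate i; all other candidates act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Example with random embeddings at the model's 384-dim output size.
print(mnr_loss_sketch(torch.randn(32, 384), torch.randn(32, 384)).item())
```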
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- num_train_epochs: 1
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: round_robin
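For reference, here is a hedged sketch of how a run with these non-default hyperparameters could be wired up with the SentenceTransformerTrainer API. The output directory and the one-row dataset are placeholders, and evaluation with the project-specific LoggingEvaluator is omitted:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, MultiDatasetBatchSamplers

model = SentenceTransformer("thenlper/gte-small")

# Placeholder pair; the real dataset has 98,112 (sentence_0, sentence_1) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Does urocortin reduce neuroinflammation after ICH?"],
    "sentence_1": ["Urocortin reduced neurological deficits in a rat ICH model."],
})

loss = MultipleNegativesRankingLoss(model)  # defaults: scale=20.0, cosine similarity

args = SentenceTransformerTrainingArguments(
    output_dir="gte_MNR",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```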
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss | ir-eval_cosine_ndcg@10 |
---|---|---|---|
0.1631 | 500 | 0.0634 | 0.9563 |
0.3262 | 1000 | 0.005 | 0.9627 |
0.4892 | 1500 | 0.0037 | 0.9631 |
0.6523 | 2000 | 0.0029 | 0.9660 |
0.8154 | 2500 | 0.0033 | 0.9663 |
0.9785 | 3000 | 0.0027 | 0.9670 |
1.0 | 3066 | - | 0.9670 |
Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}