uaritm/lik_neuro_202

This is a SetFit model that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves two steps (sketched in code after the list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
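The snippet below is a minimal sketch of these two steps, assuming setfit >= 1.0. The base checkpoint, labels, and example texts are placeholders for illustration only; they are not the actual configuration or data used to train lik_neuro_202.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny hypothetical few-shot dataset. The real model was trained on
# Ukrainian medical texts; these English placeholders only show the format.
train_dataset = Dataset.from_dict({
    "text": [
        "Patient reports recurring migraines and dizziness.",
        "Patient describes persistent anxiety and insomnia.",
        "Patient complains of chest pain on exertion.",
    ],
    "label": ["neurology", "psychiatry", "cardiology"],
})

# Start from a (multilingual) Sentence Transformer checkpoint.
# This base model is an assumption, not necessarily the one behind lik_neuro_202.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

args = TrainingArguments(
    batch_size=16,
    num_epochs=1,  # epochs of contrastive fine-tuning on sentence pairs
)

# Step 1: contrastive fine-tuning of the embedding model.
# Step 2: fitting the classification head on the fine-tuned embeddings.
# Both happen inside trainer.train().
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()

# The trained model can then be pushed to the Hub, e.g.:
# model.push_to_hub("your-username/your-setfit-model")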

Usage

This model is intended for Ukrainian medical texts, which it classifies via Sentence Transformer embeddings. It works best on texts covering neurology, psychiatry, and cardiology.

To use this model for inference, first install the SetFit library:

python -m pip install setfit

You can then run inference as follows:

from setfit import SetFitModel

# Download the model from the Hub
model = SetFitModel.from_pretrained("uaritm/lik_neuro_202")
# Run inference (returns the predicted class labels)
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
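The call above returns predicted class labels. If you also need per-class probabilities, recent SetFit versions expose predict_proba on the model; the input text here is a placeholder, and availability of the method depends on your installed SetFit version.

# Per-class probabilities (assumes a SetFit version providing predict_proba)
probs = model.predict_proba(["Patient reports recurring migraines and dizziness."])
print(probs)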

BibTeX entry and citation info

@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Citing & Authors

@misc{UARITM,
  title = {sentence-transformers: Semantic similarity of medical texts ukr, kor, eng},
  author = {Vitaliy Ostashko},
  year = {2025},
  url = {https://ai.esemi.org}
}
