## Usage
```bash
pip install -U FlagEmbedding
```
### Generate embeddings for text (dense only)
```python
import torch
from FlagEmbedding import FlagModel

model_name = "puppyyyo/larceny-base-law-knowledge-v1"
# Use the GPU when available, otherwise fall back to CPU.
devices = "cuda:0" if torch.cuda.is_available() else "cpu"

model = FlagModel(
    model_name,
    devices=devices,
    use_fp16=False,
)

sentences_1 = ["What is BGE M3?", "Definition of BM25"]
sentences_2 = [
    "BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
    "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document.",
]

embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)

# BGE-style FlagModel embeddings are L2-normalized by default,
# so the dot product is the cosine similarity.
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# base-v1
# [[0.72338223 0.7122297 ], [0.5691198 0.78866345]]
# base-v2
# [[0.6811399 0.5206243 ], [0.50919324 0.676651 ]]
# base-v3
# [[0.6299723 0.5048096 ], [0.45474052 0.63200176]]
```
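For retrieval, `FlagModel` also exposes `encode_queries` and `encode_corpus`: `encode_queries` prepends the query-side instruction that BGE-style models were trained with, while `encode_corpus` encodes passages as-is. Below is a minimal sketch of that pattern; the queries and passages are made-up examples, and the instruction behavior is assumed to be inherited from the BAAI/bge-base-zh-v1.5 base model.

```python
import numpy as np
from FlagEmbedding import FlagModel

# Illustrative sketch: encode_queries / encode_corpus are part of
# FlagEmbedding's FlagModel API; the texts below are hypothetical examples.
model = FlagModel("puppyyyo/larceny-base-law-knowledge-v1", use_fp16=False)

queries = ["What is dense retrieval?"]
passages = [
    "Dense retrieval encodes queries and documents into vectors and ranks by similarity.",
    "BM25 is a bag-of-words retrieval function.",
]

# encode_queries applies the model's query instruction before encoding;
# encode_corpus encodes the passages without it.
q_emb = model.encode_queries(queries)
p_emb = model.encode_corpus(passages)

scores = q_emb @ p_emb.T          # cosine similarities (embeddings are normalized)
best = np.argmax(scores, axis=1)  # index of the top-scoring passage per query
print(scores)
print(best)
```

Using `encode_queries` for the query side matters because BGE-style models are trained with an instruction on queries only; encoding both sides with plain `encode` can degrade retrieval quality.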
Base model: BAAI/bge-base-zh-v1.5