Instructions to use zeroentropy/zerank-2-reranker with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use zeroentropy/zerank-2-reranker with sentence-transformers:
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-2-reranker")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)

A sketch showing how to rank the passages by these scores follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
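The scores are most useful relative to one another, so a typical next step is to sort the passages by score. A minimal sketch that builds on the sentence-transformers example above (the sorting code is an illustration, not from the model card):

# Pair each passage with its score and rank from most to least relevant.
ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")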
Breaking change: predict() now returns raw logits (was sigmoid in [0, 1])
#9 · pinned · opened by dilawarm
Heads-up for existing users: as of the May 2026 sentence-transformers v5.4 integration (merged from #8), CrossEncoder("zeroentropy/zerank-2-reranker").predict(...) returns raw "Yes" logits in bf16 instead of the previous sigmoid'd probabilities in [0, 1].
What changed
- predict() returns raw logits (e.g. ~5.58, ~-4.50) instead of ~0.75, ~0.29.
- trust_remote_code=True is no longer required; the bundled modeling_zeranker.py was removed.
- Rankings are unchanged. NDCG@10 verified equivalent on mteb/scidocs-reranking.
Migration
If your code thresholds on predict() output, apply (scores / 5).sigmoid() to recover the previous semantics:
scores = model.predict(pairs, convert_to_tensor=True)
probabilities = (scores / 5).sigmoid()
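Putting the pieces together, here is a minimal self-contained sketch of the threshold migration; the query/passage pairs and the 0.5 cutoff are illustrative assumptions, not part of this thread:

from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-2-reranker")
pairs = [
    ("Which planet is known as the Red Planet?", "Mars is often called the Red Planet."),
    ("Which planet is known as the Red Planet?", "Saturn is famous for its rings."),
]

# predict() now returns raw "Yes" logits (unbounded).
scores = model.predict(pairs, convert_to_tensor=True)

# Recover the old [0, 1] semantics before applying any existing threshold.
probabilities = (scores / 5).sigmoid()
relevant = [pair for pair, p in zip(pairs, probabilities) if p >= 0.5]  # 0.5 is an illustrative cutoff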
If you only use the scores for ranking (sort or top-k), no change is needed.
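This is because sigmoid is strictly increasing, so (scores / 5).sigmoid() never changes the order; the example values quoted above also round-trip as expected (a quick sanity check using torch):

import torch

logits = torch.tensor([5.58, -4.50])  # example raw logits from this thread
probs = (logits / 5).sigmoid()
print(probs)  # tensor([0.7532, 0.2891]) -- matches the old ~0.75, ~0.29

# A strictly monotonic transform preserves ranking: argsort is identical.
assert torch.equal(logits.argsort(), probs.argsort())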