# gte-small GGUF

GGUF conversions of [thenlper/gte-small](https://huggingface.co/thenlper/gte-small) for use with CrispEmbed and Ollama.

## Files

| File | Quantization | Size |
|------|--------------|------|
| gte-small-f32.gguf | F32 | 0 MB |
| gte-small-q4_k.gguf | Q4_K | 0 MB |
| gte-small-q8_0.gguf | Q8_0 | 0 MB |
| gte-small.gguf | F32 | 0 MB |

Recommended: Q8_0 for quality (cosine similarity vs. HF: 0.9999), Q4_K for smallest size (0.991).
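The cosine scores above measure how closely a quantized model's embeddings track the full-precision HuggingFace output. A minimal sketch of that check, using hypothetical short vectors standing in for real 384-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative values only: a full-precision embedding and a slightly
# perturbed "quantized" counterpart.
reference = [0.12, -0.45, 0.33, 0.08]
quantized = [0.119, -0.452, 0.331, 0.079]
score = cosine_similarity(reference, quantized)  # close to 1.0
```

A score near 1.0 means quantization barely moved the embedding direction, which is what matters for similarity search.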

## Quick Start

### CrispEmbed

```sh
./crispembed -m gte-small "Hello world"
./crispembed-server -m gte-small --port 8080
```

### Ollama (with CrispStrobe fork)

```sh
# Create model
echo "FROM gte-small-q8_0.gguf" > Modelfile
ollama create gte-small -f Modelfile

# Embed
curl http://localhost:11434/api/embed -d '{"model":"gte-small","input":["Hello world"]}'
```
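The same Ollama endpoint can be called from Python with only the standard library. A sketch, assuming the server is running on the default port; `/api/embed` takes `model` and `input` and returns the vectors under `embeddings`:

```python
import json
import urllib.request

def build_payload(texts, model="gte-small"):
    # Request body for Ollama's /api/embed: a model name plus a list of inputs.
    return {"model": model, "input": texts}

def embed(texts, model="gte-small", host="http://localhost:11434"):
    # POST the payload and return the list of embedding vectors.
    data = json.dumps(build_payload(texts, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/embed",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["embeddings"]
```

Usage: `vectors = embed(["Hello world"])` yields one 384-dimensional vector per input string.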

### Python (CrispEmbed)

```python
from crispembed import CrispEmbed

model = CrispEmbed("gte-small-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
```

## Model Details

| Property | Value |
|----------|-------|
| Architecture | BERT |
| Parameters | 33M |
| Embedding Dimension | 384 |
| Layers | 12 |
| Pooling | mean |
| Tokenizer | WordPiece |
| Language | en |
| Q8_0 cosine vs. HuggingFace | 0.9999 |
| Q4_K cosine vs. HuggingFace | 0.991 |
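The mean pooling listed above averages the per-token hidden states into a single 384-dimensional sentence vector, skipping padding positions via the attention mask. A minimal sketch with tiny illustrative vectors:

```python
def mean_pool(token_embeddings, attention_mask):
    # Average token vectors, counting only positions where the mask is 1.
    dim = len(token_embeddings[0])
    totals = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, v in enumerate(vec):
                totals[i] += v
    return [t / count for t in totals]

# Two real tokens; the third position is padding (mask 0) and is excluded.
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
sentence_vec = mean_pool(tokens, mask)  # → [2.0, 3.0]
```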

## Server API

The CrispEmbed server supports four API dialects:

- `POST /embed` - native
- `POST /v1/embeddings` - OpenAI-compatible
- `POST /api/embed` - Ollama-compatible
- `POST /api/embeddings` - Ollama legacy
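The dialects differ mainly in response shape. A sketch of a client-side normalizer, assuming the standard OpenAI and Ollama response formats (the native `/embed` shape is not documented above, so it is omitted):

```python
def extract_vectors(dialect, body):
    # Normalize embedding vectors out of each dialect's response shape.
    if dialect == "openai":
        # /v1/embeddings wraps each vector in an object under "data".
        return [item["embedding"] for item in body["data"]]
    if dialect == "ollama":
        # /api/embed returns a flat list of vectors under "embeddings".
        return body["embeddings"]
    if dialect == "ollama-legacy":
        # /api/embeddings returns a single vector under "embedding".
        return [body["embedding"]]
    raise ValueError(f"unknown dialect: {dialect}")
```

This lets one client consume whichever endpoint a deployment exposes.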

## Credits
