# gte-small GGUF
GGUF format of thenlper/gte-small for use with CrispEmbed and Ollama.
## Files
| File | Quantization | Size |
|---|---|---|
| gte-small-f32.gguf | F32 | 0 MB |
| gte-small-q4_k.gguf | Q4_K | 0 MB |
| gte-small-q8_0.gguf | Q8_0 | 0 MB |
| gte-small.gguf | F32 | 0 MB |
Recommended: **Q8_0** for quality (cosine similarity vs. the HF reference: 0.9999), **Q4_K** for smaller size (0.991).
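The quality figures above are cosine similarities between embeddings from the quantized GGUF model and the full-precision HuggingFace reference. A minimal sketch of how such a comparison is computed (toy 3-dimensional vectors standing in for real 384-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for full-precision vs. quantized embeddings
full = [0.1, 0.2, 0.3]
quant = [0.1001, 0.1999, 0.3002]
print(cosine_similarity(full, quant))  # very close to 1.0
```

A value of 0.9999 means the quantized embeddings are nearly indistinguishable from FP32 for retrieval purposes.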
## Quick Start

### CrispEmbed

```sh
./crispembed -m gte-small "Hello world"
./crispembed-server -m gte-small --port 8080
```
### Ollama (with CrispStrobe fork)

```sh
# Create model
echo "FROM gte-small-q8_0.gguf" > Modelfile
ollama create gte-small -f Modelfile

# Embed
curl http://localhost:11434/api/embed -d '{"model":"gte-small","input":["Hello world"]}'
```
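The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming an Ollama server on the default port (`11434`) with the model created as above:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/embed"  # default Ollama port

def build_payload(texts, model="gte-small"):
    """JSON body expected by Ollama's /api/embed endpoint."""
    return json.dumps({"model": model, "input": texts})

def embed(texts, model="gte-small"):
    """POST the texts and return the list of embedding vectors."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(texts, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["embeddings"]
```

`embed(["Hello world"])` returns one 384-dimensional vector per input string.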
### Python (CrispEmbed)

```python
from crispembed import CrispEmbed

model = CrispEmbed("gte-small-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
```
## Model Details
| Property | Value |
|---|---|
| Architecture | BERT |
| Parameters | 33M |
| Embedding Dimension | 384 |
| Layers | 12 |
| Pooling | mean |
| Tokenizer | WordPiece |
| Language | en |
| Q8_0 cosine similarity vs. HuggingFace FP32 | 0.9999 |
| Q4_K cosine similarity vs. HuggingFace FP32 | 0.991 |
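The mean pooling listed above averages the token-level hidden states into a single sentence vector, skipping padding positions. An illustrative sketch with toy dimensions (the real model pools 384-dimensional states):

```python
def mean_pool(token_vectors, attention_mask):
    """Average token embeddings, ignoring padding positions (mask == 0)."""
    dim = len(token_vectors[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_vectors, attention_mask):
        if mask:
            count += 1
            for i, value in enumerate(vec):
                sums[i] += value
    return [s / count for s in sums]

tokens = [[1.0, 3.0], [3.0, 5.0], [0.0, 0.0]]  # last row is padding
print(mean_pool(tokens, [1, 1, 0]))  # → [2.0, 4.0]
```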
## Server API
CrispEmbed server supports four API dialects:
- `POST /embed` (native)
- `POST /v1/embeddings` (OpenAI-compatible)
- `POST /api/embed` (Ollama-compatible)
- `POST /api/embeddings` (Ollama legacy)
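The dialects differ mainly in response shape: OpenAI-style responses nest each vector under `data[i].embedding`, while Ollama's `/api/embed` returns a flat `embeddings` list. A small sketch of extracting vectors from an OpenAI-style response:

```python
def parse_openai_response(resp):
    """Extract vectors from an OpenAI-style /v1/embeddings response body."""
    return [item["embedding"] for item in resp["data"]]

# Mock response in the OpenAI embeddings shape
mock = {
    "data": [
        {"index": 0, "embedding": [0.1, 0.2]},
        {"index": 1, "embedding": [0.3, 0.4]},
    ]
}
print(parse_openai_response(mock))  # → [[0.1, 0.2], [0.3, 0.4]]
```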
## Credits
- Original model: thenlper/gte-small
- Inference: CrispEmbed (MIT, ggml-based)