Spaetzle-v60-7b

This is a progressive merge (mostly dare-ties, with some slerp steps) intended as a reasonable compromise for English and German local tasks. Performance looks okay so far: for example, on EQ-Bench it reaches Score (v2_de): 65.08 (Parseable: 171.0).
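For context, dare-ties combines DARE's drop-and-rescale of task vectors (the deltas between a fine-tune and the base model) with TIES-style sign election. The snippet below is only a toy sketch of that mechanism on random tensors; the densities, weights, and stand-in models are invented and do not reproduce the actual Spaetzle-v60-7b merge recipe.

```python
# Toy illustration of the DARE-TIES merge idea; not the actual Spaetzle recipe.
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density, rng):
    """DARE: randomly keep task-vector entries with prob. `density`, rescale survivors."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def ties_merge(base, finetuned_models, density=0.5, weight=1.0):
    """TIES-style merge: elect a per-parameter sign, keep agreeing deltas, average them."""
    deltas = [dare(ft - base, density, rng) for ft in finetuned_models]
    elected_sign = np.sign(sum(deltas))          # dominant sign per parameter
    merged_delta = np.zeros_like(base)
    counts = np.zeros_like(base)
    for d in deltas:
        agree = (np.sign(d) == elected_sign) & (d != 0)   # keep only agreeing, non-dropped entries
        merged_delta += np.where(agree, d, 0.0)
        counts += agree
    merged_delta = np.divide(merged_delta, np.maximum(counts, 1))
    return base + weight * merged_delta

# Toy example: one "layer" from a base model and two fine-tunes.
base = rng.normal(size=(4, 4))
ft_en = base + rng.normal(scale=0.1, size=(4, 4))   # stands in for an English-tuned model
ft_de = base + rng.normal(scale=0.1, size=(4, 4))   # stands in for a German-tuned model
merged = ties_merge(base, [ft_en, ft_de], density=0.5)
print(merged.shape)
```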

Spaetzle-v60-7b is a merge of the following models

- Format: GGUF
- Model size: 7.24B params
- Architecture: llama
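A minimal sketch for running one of the GGUF quants locally with llama-cpp-python. The quant filename, context size, and GPU offload settings below are assumptions; check the repository's file list for the exact filename of the quantization level you want.

```python
# Sketch: download a quant from the repo and run a short chat completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="cstr/Spaetzle-v60-7b-GGUF",
    filename="Spaetzle-v60-7b.Q4_K_M.gguf",  # assumed filename; adjust to the repo's file list
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window (assumed setting)
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Fasse den Nutzen von Modell-Merges kurz zusammen."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```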
