The improved 30M variant of Veyra. These models are meant for local CPU inference.
- veyra-ai/veyra2-30m-base-2b-tokens
  Text Generation • 34.6M params • 254
- veyra-ai/veyra2-30m-base-2b-tokens-gguf
  Text Generation • 34.6M params • 243
- veyra-ai/veyra2-30m-base-2b-tokens-onnx-int8
  Text Generation • 52
- veyra-ai/veyra2-15m-base-1b-tokens
  Text Generation • 14.7M params • 108