NOTE: You will need a recent build of llama.cpp to run these quants (at least commit `494c870`).

GGUF importance matrix (imatrix) quants for https://huggingface.co/TechxGenus/starcoder2-15b-instruct

This model fine-tunes starcoder2-15b on an additional 0.7 billion high-quality, code-related tokens for 3 epochs. Training used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the process. The model achieves 77.4 pass@1 on HumanEval-Python and uses the Alpaca instruction format (without the system prompt).

Layers: 40
Context: 16384
Template:

### Instruction
{instruction}
### Response
{response}
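A minimal sketch of assembling a prompt in this template from Python (the helper name is illustrative, not part of the model card; the model generates the `{response}` portion itself, so the prompt ends after the `### Response` header):

```python
def build_prompt(instruction: str) -> str:
    """Format a request using the Alpaca-style template above (no system prompt)."""
    return f"### Instruction\n{instruction}\n### Response\n"

# Example: ask the model for a small coding task.
prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The resulting string can be passed as the prompt to llama.cpp or any GGUF-compatible runtime.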
Model size: 16B params
Architecture: starcoder2

Available quants: 3-bit, 4-bit, 6-bit, 8-bit


Model tree for dranger003/starcoder2-15b-instruct-iMat.GGUF