---
tags:
- generated_from_trainer
- code
- coding
- gemma
- TensorBlock
- GGUF
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- code
thumbnail: https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
base_model: MAISAAI/gemma-2b-coder
model-index:
- name: gemma-2b-coder
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## MAISAAI/gemma-2b-coder - GGUF
This repo contains GGUF format model files for [MAISAAI/gemma-2b-coder](https://huggingface.co/MAISAAI/gemma-2b-coder).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
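To confirm that a local llama.cpp build is recent enough, you can check the build number it reports. A quick sanity check (the `./llama-cli` binary name assumes a standard llama.cpp build):
```shell
# Print the build number and commit of the local llama.cpp binary;
# anything at or after build b4242 should load these files.
./llama-cli --version
```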
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π Try it now! π</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
No chat template was detected for this model, so prompts should be passed to it as plain text.
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-2b-coder-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q2_K.gguf) | Q2_K | 1.158 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-2b-coder-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q3_K_S.gguf) | Q3_K_S | 1.288 GB | very small, high quality loss |
| [gemma-2b-coder-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q3_K_M.gguf) | Q3_K_M | 1.384 GB | very small, high quality loss |
| [gemma-2b-coder-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q3_K_L.gguf) | Q3_K_L | 1.466 GB | small, substantial quality loss |
| [gemma-2b-coder-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q4_0.gguf) | Q4_0 | 1.551 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-2b-coder-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q4_K_S.gguf) | Q4_K_S | 1.560 GB | small, greater quality loss |
| [gemma-2b-coder-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q4_K_M.gguf) | Q4_K_M | 1.630 GB | medium, balanced quality - recommended |
| [gemma-2b-coder-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q5_0.gguf) | Q5_0 | 1.799 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-2b-coder-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q5_K_S.gguf) | Q5_K_S | 1.799 GB | large, low quality loss - recommended |
| [gemma-2b-coder-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q5_K_M.gguf) | Q5_K_M | 1.840 GB | large, very low quality loss - recommended |
| [gemma-2b-coder-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q6_K.gguf) | Q6_K | 2.062 GB | very large, extremely low quality loss |
| [gemma-2b-coder-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/blob/main/gemma-2b-coder-Q8_0.gguf) | Q8_0 | 2.669 GB | very large, extremely low quality loss - not recommended |
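Once a file is downloaded (see the instructions below), it can be loaded directly with llama.cpp's CLI. A minimal sketch; the file name, prompt, and token count are placeholders:
```shell
# Run the recommended Q4_K_M quant with llama.cpp, generating
# up to 256 tokens for a single coding prompt.
./llama-cli -m gemma-2b-coder-Q4_K_M.gguf \
  -p "Write a Python function that reverses a string." \
  -n 256
```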
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gemma-2b-coder-GGUF --include "gemma-2b-coder-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
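Alternatively, a single file can be fetched directly over HTTPS from the repository's `resolve` endpoint (shown here with `wget`; any HTTP client works):
```shell
# Download one quant directly; the resolve/main path serves the raw file.
wget https://huggingface.co/tensorblock/gemma-2b-coder-GGUF/resolve/main/gemma-2b-coder-Q2_K.gguf
```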
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), run:
```shell
huggingface-cli download tensorblock/gemma-2b-coder-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
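After downloading, a file can also be served over an OpenAI-compatible HTTP API using llama.cpp's bundled server. A minimal sketch, assuming the Q4_K_M file was downloaded into `MY_LOCAL_DIR`:
```shell
# Start llama.cpp's HTTP server on port 8080; it exposes
# OpenAI-compatible /v1/chat/completions and /v1/completions endpoints.
./llama-server -m MY_LOCAL_DIR/gemma-2b-coder-Q4_K_M.gguf --port 8080
```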