---
base_model: swiss-ai/Apertus-8B-Instruct-2509
extra_gated_button_content: Submit
extra_gated_fields:
  Affiliation: text
  By clicking Submit below I accept the terms of use: checkbox
  Country: country
  Your Name: text
  geo: ip_location
extra_gated_prompt: "### Apertus LLM Acceptable Use Policy \n(1.0 | September 1, 2025)\n\"Agreement\" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. \n\nBy using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of Apertus LLM. \n\nThe training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual (Personal Data). You process Personal Data as independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model. "
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- multilingual
- compliant
- swiss-ai
- apertus
---

## About

static quants of https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Apertus-8B-Instruct-2509-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
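As a minimal sketch (assuming you have `huggingface_hub` and `llama-cpp-python` installed, and that your llama.cpp build is recent enough to support the Apertus architecture), downloading and running one of the single-file quants from this repo could look like this; the `Q4_K_M` filename is just one entry from the table below:

```python
# Minimal sketch: fetch one quant from this repo and run a completion.
# Assumes: pip install huggingface_hub llama-cpp-python
# and a llama.cpp build that supports this model's architecture.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant from the table below into the local HF cache.
model_path = hf_hub_download(
    repo_id="mradermacher/Apertus-8B-Instruct-2509-GGUF",
    filename="Apertus-8B-Instruct-2509.Q4_K_M.gguf",
)

# Note: multi-part quants must first be concatenated into one .gguf,
# e.g. `cat model.gguf.part* > model.gguf` (see TheBloke's READMEs above).

llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Q: What is the capital of Switzerland?\nA:", max_tokens=32)
print(result["choices"][0]["text"])
```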
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q3_K_L.gguf) | Q3_K_L | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Apertus-8B-Instruct-2509-GGUF/resolve/main/Apertus-8B-Instruct-2509.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.