| Column | Type | Range / values |
|:-------|:-----|:---------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 00:36:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 535 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 00:36:49 |
| card | string | length 11 to 1.01M |
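Each record below follows this schema. As a minimal sketch of reading it programmatically — assuming the dump was exported from a Hugging Face dataset with these columns; the dataset id used here is an assumption, not taken from this document — the rows can be streamed with the `datasets` library:

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the repository this dump was actually exported from.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train", streaming=True)

for row in ds:
    # Each record carries the columns described in the schema table above.
    print(row["modelId"], row["author"], row["downloads"], row["likes"])
    card_text = row["card"]   # full README markdown, up to ~1.01M characters
    tags = row["tags"]        # list of tag strings
    break
```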
mradermacher/ANIMA-biodesign-7B-slerp-GGUF
mradermacher
2024-05-06T05:25:07Z
7
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp", "Severian/ANIMA-Neural-Hermes", "en", "base_model:allknowingroger/ANIMA-biodesign-7B-slerp", "base_model:quantized:allknowingroger/ANIMA-biodesign-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-01T12:00:25Z
--- base_model: allknowingroger/ANIMA-biodesign-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp - Severian/ANIMA-Neural-Hermes --- ## About static quants of https://huggingface.co/allknowingroger/ANIMA-biodesign-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ANIMA-biodesign-7B-slerp-GGUF/resolve/main/ANIMA-biodesign-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
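The quant tables in these cards link individual GGUF files. As an illustrative sketch only — the card does not prescribe a runtime, so using `llama-cpp-python` here is an assumption — one of the single-file quants listed above (Q4_K_M) can be fetched and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python; one possible GGUF runtime, not mandated by the card

# Fetch the Q4_K_M quant listed in the "Provided Quants" table above.
path = hf_hub_download(
    repo_id="mradermacher/ANIMA-biodesign-7B-slerp-GGUF",
    filename="ANIMA-biodesign-7B-slerp.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Describe one biomimetic design principle.", max_tokens=64)
print(out["choices"][0]["text"])
```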
mradermacher/Yeet_51b_200k-GGUF
mradermacher
2024-05-06T05:24:43Z
106
0
transformers
[ "transformers", "gguf", "en", "base_model:MarsupialAI/Yeet_51b_200k", "base_model:quantized:MarsupialAI/Yeet_51b_200k", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-01T16:03:18Z
--- base_model: MarsupialAI/Yeet_51b_200k language: - en library_name: transformers license: other license_name: yi-other quantized_by: mradermacher --- ## About static quants of https://huggingface.co/MarsupialAI/Yeet_51b_200k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q2_K.gguf) | Q2_K | 19.6 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_XS.gguf) | IQ3_XS | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_S.gguf) | Q3_K_S | 22.8 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_S.gguf) | IQ3_S | 22.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ3_M.gguf) | IQ3_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_M.gguf) | Q3_K_M | 25.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q3_K_L.gguf) | Q3_K_L | 27.6 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.IQ4_XS.gguf) | IQ4_XS | 28.3 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q4_K_S.gguf) | Q4_K_S | 29.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q4_K_M.gguf) | Q4_K_M | 31.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q5_K_S.gguf) | Q5_K_S | 35.9 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q5_K_M.gguf) | Q5_K_M | 36.8 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q6_K.gguf) | Q6_K | 42.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF/resolve/main/Yeet_51b_200k.Q8_0.gguf.part2of2) | Q8_0 | 54.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
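The Q8_0 entry in the card above is split into `part1of2`/`part2of2` files, and the Usage note points to instructions for concatenating multi-part files. A minimal sketch, assuming the parts only need to be joined byte-for-byte in order (the output filename is chosen for illustration):

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Yeet_51b_200k-GGUF"
parts = [
    "Yeet_51b_200k.Q8_0.gguf.part1of2",
    "Yeet_51b_200k.Q8_0.gguf.part2of2",
]

# Download both parts, then concatenate them in order into a single GGUF file.
with open("Yeet_51b_200k.Q8_0.gguf", "wb") as out:
    for name in parts:
        with open(hf_hub_download(repo_id=repo, filename=name), "rb") as part:
            shutil.copyfileobj(part, out)
```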
mradermacher/TimeMax-20B-GGUF
mradermacher
2024-05-06T05:24:37Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:R136a1/TimeMax-20B", "base_model:quantized:R136a1/TimeMax-20B", "endpoints_compatible", "region:us" ]
null
2024-04-01T16:14:35Z
--- base_model: R136a1/TimeMax-20B language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/R136a1/TimeMax-20B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q2_K.gguf) | Q2_K | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_XS.gguf) | IQ3_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_S.gguf) | IQ3_S | 9.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_S.gguf) | Q3_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ3_M.gguf) | IQ3_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_M.gguf) | Q3_K_M | 10.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q3_K_L.gguf) | Q3_K_L | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.IQ4_XS.gguf) | IQ4_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q4_K_S.gguf) | Q4_K_S | 11.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q4_K_M.gguf) | Q4_K_M | 12.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q5_K_S.gguf) | Q5_K_S | 14.1 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q5_K_M.gguf) | Q5_K_M | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q6_K.gguf) | Q6_K | 16.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-GGUF/resolve/main/TimeMax-20B.Q8_0.gguf) | Q8_0 | 21.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TimeMax-20B-i1-GGUF
mradermacher
2024-05-06T05:24:20Z
6
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "en", "base_model:R136a1/TimeMax-20B", "base_model:quantized:R136a1/TimeMax-20B", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T20:10:41Z
--- base_model: R136a1/TimeMax-20B language: - en library_name: transformers pipeline_tag: text-generation quantized_by: mradermacher tags: - mergekit - merge --- ## About weighted/imatrix quants of https://huggingface.co/R136a1/TimeMax-20B **Only 50k tokens from my standard set have been used, as more caused an overflow. This is likely a problem with the model itself.** <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/TimeMax-20B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | | | 
[GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/TimeMax-20B-i1-GGUF/resolve/main/TimeMax-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/NeuralStock-7B-GGUF
mradermacher
2024-05-06T05:24:12Z
73
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "liminerity/M7-7b", "Gille/StrangeMerges_32-7B-slerp", "automerger/YamShadow-7B", "en", "base_model:Kukedlc/NeuralStock-7B", "base_model:quantized:Kukedlc/NeuralStock-7B", "endpoints_compatible", "region:us" ]
null
2024-04-01T21:52:33Z
--- base_model: Kukedlc/NeuralStock-7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - liminerity/M7-7b - Gille/StrangeMerges_32-7B-slerp - automerger/YamShadow-7B --- ## About static quants of https://huggingface.co/Kukedlc/NeuralStock-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralStock-7B-GGUF/resolve/main/NeuralStock-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WinterGoliath-123b-32k-i1-GGUF
mradermacher
2024-05-06T05:24:09Z
8
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "en", "base_model:ChuckMcSneed/WinterGoliath-123b-32k", "base_model:quantized:ChuckMcSneed/WinterGoliath-123b-32k", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-01T22:22:44Z
--- base_model: ChuckMcSneed/WinterGoliath-123b-32k language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - merge - mergekit --- ## About weighted/imatrix quants of https://huggingface.co/ChuckMcSneed/WinterGoliath-123b-32k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/WinterGoliath-123b-32k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 26.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 28.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 33.2 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.8 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 38.6 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 42.0 | | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q2_K.gguf) | i1-Q2_K | 45.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.9 | lower quality | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 51.0 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 53.7 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.9 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.7 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_M.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.9 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 65.2 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 66.4 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 70.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 70.6 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 74.6 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 85.5 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 87.8 | | | [PART 1](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/WinterGoliath-123b-32k-i1-GGUF/resolve/main/WinterGoliath-123b-32k.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 101.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Zebrafish-linear-7B-GGUF
mradermacher
2024-05-06T05:24:05Z
13
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:mlabonne/Zebrafish-linear-7B", "base_model:quantized:mlabonne/Zebrafish-linear-7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-02T00:44:08Z
--- base_model: mlabonne/Zebrafish-linear-7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About static quants of https://huggingface.co/mlabonne/Zebrafish-linear-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Zebrafish-linear-7B-GGUF/resolve/main/Zebrafish-linear-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MonarchPipe-7B-slerp-GGUF
mradermacher
2024-05-06T05:23:52Z
25
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1227", "mlabonne/AlphaMonarch-7B", "en", "base_model:ichigoberry/MonarchPipe-7B-slerp", "base_model:quantized:ichigoberry/MonarchPipe-7B-slerp", "license:cc-by-nc-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-02T02:11:17Z
--- base_model: ichigoberry/MonarchPipe-7B-slerp language: - en library_name: transformers license: cc-by-nc-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1227 - mlabonne/AlphaMonarch-7B --- ## About static quants of https://huggingface.co/ichigoberry/MonarchPipe-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MonarchPipe-7B-slerp-GGUF/resolve/main/MonarchPipe-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/UNAversal-8x7B-v1beta-GGUF
mradermacher
2024-05-06T05:23:33Z
69
0
transformers
[ "transformers", "gguf", "UNA", "juanako", "mixtral", "MoE", "en", "base_model:fblgit/UNAversal-8x7B-v1beta", "base_model:quantized:fblgit/UNAversal-8x7B-v1beta", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-02T05:11:23Z
--- base_model: fblgit/UNAversal-8x7B-v1beta language: - en library_name: transformers license: cc-by-nc-sa-4.0 quantized_by: mradermacher tags: - UNA - juanako - mixtral - MoE --- ## About static quants of https://huggingface.co/fblgit/UNAversal-8x7B-v1beta <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF/resolve/main/UNAversal-8x7B-v1beta.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/bagel-dpo-20b-v04-GGUF
mradermacher
2024-05-06T05:23:25Z
206
2
transformers
[ "transformers", "gguf", "en", "dataset:ai2_arc", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:camel-ai/biology", "dataset:camel-ai/chemistry", "dataset:camel-ai/math", "dataset:camel-ai/physics", "dataset:jondurbin/contextual-dpo-v0.1", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/py-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:lmsys/lmsys-chat-1m", "dataset:ParisNeo/lollms_aware_dataset", "dataset:TIGER-Lab/MathInstruct", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:kingbri/PIPPA-shareGPT", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:ropes", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:b-mc2/sql-create-context", "dataset:squad_v2", "dataset:mattpscott/airoboros-summarization", "dataset:migtissera/Synthia-v1.3", "dataset:unalignment/toxic-dpo-v0.2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:winogrande", "base_model:jondurbin/bagel-dpo-20b-v04", "base_model:quantized:jondurbin/bagel-dpo-20b-v04", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-02T06:31:44Z
--- base_model: jondurbin/bagel-dpo-20b-v04 datasets: - ai2_arc - allenai/ultrafeedback_binarized_cleaned - argilla/distilabel-intel-orca-dpo-pairs - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - bluemoon-fandom-1-1-rp-cleaned - boolq - camel-ai/biology - camel-ai/chemistry - camel-ai/math - camel-ai/physics - jondurbin/contextual-dpo-v0.1 - jondurbin/gutenberg-dpo-v0.1 - jondurbin/py-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - WizardLM/WizardLM_evol_instruct_70k - glaiveai/glaive-function-calling-v2 - jondurbin/gutenberg-dpo-v0.1 - grimulkan/LimaRP-augmented - lmsys/lmsys-chat-1m - ParisNeo/lollms_aware_dataset - TIGER-Lab/MathInstruct - Muennighoff/natural-instructions - openbookqa - kingbri/PIPPA-shareGPT - piqa - Vezora/Tested-22k-Python-Alpaca - ropes - cakiki/rosetta-code - Open-Orca/SlimOrca - b-mc2/sql-create-context - squad_v2 - mattpscott/airoboros-summarization - migtissera/Synthia-v1.3 - unalignment/toxic-dpo-v0.2 - WhiteRabbitNeo/WRN-Chapter-1 - WhiteRabbitNeo/WRN-Chapter-2 - winogrande language: - en library_name: transformers license: other license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license license_name: internlm2-20b quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jondurbin/bagel-dpo-20b-v04 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/bagel-dpo-20b-v04-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q2_K.gguf) | Q2_K | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_XS.gguf) | IQ3_XS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_S.gguf) | Q3_K_S | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ3_M.gguf) | IQ3_M | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_M.gguf) | Q3_K_M | 10.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q3_K_L.gguf) | Q3_K_L | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.IQ4_XS.gguf) | IQ4_XS | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q4_K_S.gguf) | Q4_K_S | 12.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q4_K_M.gguf) | Q4_K_M | 12.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q5_K_S.gguf) | Q5_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q5_K_M.gguf) | Q5_K_M | 14.8 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q6_K.gguf) | Q6_K | 17.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.Q8_0.gguf) | Q8_0 | 21.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-20b-v04-GGUF/resolve/main/bagel-dpo-20b-v04.SOURCE.gguf) | SOURCE | 39.8 | source gguf, only provided when it was hard to come by | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Yeet_51b_200k-i1-GGUF
mradermacher
2024-05-06T05:23:14Z
30
0
transformers
[ "transformers", "gguf", "en", "base_model:MarsupialAI/Yeet_51b_200k", "base_model:quantized:MarsupialAI/Yeet_51b_200k", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-02T09:58:07Z
--- base_model: MarsupialAI/Yeet_51b_200k language: - en library_name: transformers license: other license_name: yi-other no_imatrix: 'IQ3_XXS GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0' quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/MarsupialAI/Yeet_51b_200k **No more quants forthcoming, as llama.cpp crashes.** <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Yeet_51b_200k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q2_K.gguf) | i1-Q2_K | 19.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 22.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 25.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 27.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_0.gguf) | i1-Q4_0 | 29.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 29.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 31.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 35.9 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 36.8 | | | [GGUF](https://huggingface.co/mradermacher/Yeet_51b_200k-i1-GGUF/resolve/main/Yeet_51b_200k.i1-Q6_K.gguf) | i1-Q6_K | 42.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Maxine-7B-0401-ties-GGUF
mradermacher
2024-05-06T05:23:06Z
1
1
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-02T12:15:43Z
--- base_model: louisbrulenaudet/Maxine-7B-0401-ties language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/louisbrulenaudet/Maxine-7B-0401-ties <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Maxine-7B-0401-ties-GGUF/resolve/main/Maxine-7B-0401-ties.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
nuebaek/komt_mistral_mss_user_111_max_steps_80
nuebaek
2024-05-06T05:22:39Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-06T05:19:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/MultiVerse_70B-GGUF
mradermacher
2024-05-06T05:22:12Z
22
4
transformers
[ "transformers", "gguf", "en", "base_model:MTSAIR/MultiVerse_70B", "base_model:quantized:MTSAIR/MultiVerse_70B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-02T19:02:24Z
--- base_model: MTSAIR/MultiVerse_70B language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE license_name: qwen quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MTSAIR/MultiVerse_70B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q2_K.gguf) | Q2_K | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_XS.gguf) | IQ3_XS | 31.5 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_S.gguf) | IQ3_S | 33.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_S.gguf) | Q3_K_S | 33.1 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_M.gguf) | IQ3_M | 34.8 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_M.gguf) | Q3_K_M | 36.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_L.gguf) | Q3_K_L | 40.1 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ4_XS.gguf) | IQ4_XS | 40.7 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q4_K_S.gguf) | Q4_K_S | 42.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q4_K_M.gguf) | Q4_K_M | 45.3 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_M.gguf.part2of2) | Q5_K_M | 52.9 | | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q6_K.gguf.part2of2) | Q6_K | 60.9 | very good quality | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q8_0.gguf.part2of2) | Q8_0 | 78.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
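As a worked illustration of the multi-part note in the usage section above (not part of the original card): the sketch below downloads the two Q5_K_S pieces listed in the table with `huggingface_hub` and joins them into a single GGUF file. The repo and file names are taken from the table; the output filename and the use of `huggingface_hub` are assumptions of this sketch.

```python
# Sketch: fetch and join a two-part quant (assumes `pip install huggingface_hub`).
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/MultiVerse_70B-GGUF"
parts = [
    "MultiVerse_70B.Q5_K_S.gguf.part1of2",
    "MultiVerse_70B.Q5_K_S.gguf.part2of2",
]

# Download each part into the local Hugging Face cache and keep its path.
local_paths = [hf_hub_download(repo_id=repo, filename=p) for p in parts]

# Concatenate the parts in order into one usable GGUF file.
with open("MultiVerse_70B.Q5_K_S.gguf", "wb") as out:
    for path in local_paths:
        with open(path, "rb") as part_file:
            shutil.copyfileobj(part_file, out)
```

The same pattern applies to the Q5_K_M, Q6_K and Q8_0 rows, which are also split into two parts.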
mradermacher/HyouKan-3x7B-GGUF
mradermacher
2024-05-06T05:22:02Z
53
1
transformers
[ "transformers", "gguf", "moe", "merge", "Roleplay", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-02T23:18:27Z
--- base_model: Alsebay/HyouKan-3x7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - moe - merge - Roleplay --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Alsebay/HyouKan-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q2_K.gguf) | Q2_K | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.IQ3_S.gguf) | IQ3_S | 8.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.IQ3_M.gguf) | IQ3_M | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q5_K_S.gguf) | Q5_K_S | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q6_K.gguf) | Q6_K | 15.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/resolve/main/HyouKan-3x7B.Q8_0.gguf) | Q8_0 | 19.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/airoboros-34b-3.3-GGUF
mradermacher
2024-05-06T05:21:59Z
60
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:mattpscott/airoboros-summarization", "dataset:unalignment/toxic-dpo-v0.2", "base_model:jondurbin/airoboros-34b-3.3", "base_model:quantized:jondurbin/airoboros-34b-3.3", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-02T23:45:26Z
--- base_model: jondurbin/airoboros-34b-3.3 datasets: - jondurbin/airoboros-3.2 - bluemoon-fandom-1-1-rp-cleaned - boolq - jondurbin/gutenberg-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - glaiveai/glaive-function-calling-v2 - grimulkan/LimaRP-augmented - piqa - Vezora/Tested-22k-Python-Alpaca - mattpscott/airoboros-summarization - unalignment/toxic-dpo-v0.2 language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jondurbin/airoboros-34b-3.3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q2_K.gguf) | Q2_K | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_XS.gguf) | IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_S.gguf) | Q3_K_S | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ3_M.gguf) | IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q3_K_L.gguf) | Q3_K_L | 18.8 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.IQ4_XS.gguf) | IQ4_XS | 19.3 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q5_K_S.gguf) | Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q5_K_M.gguf) | Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q6_K.gguf) | Q6_K | 28.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF/resolve/main/airoboros-34b-3.3.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF
mradermacher
2024-05-06T05:21:54Z
17
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Joseph717171/Mistral-12.25B-Instruct-v0.2", "base_model:quantized:Joseph717171/Mistral-12.25B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T00:13:46Z
--- base_model: Joseph717171/Mistral-12.25B-Instruct-v0.2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Joseph717171/Mistral-12.25B-Instruct-v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 7.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 10.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF/resolve/main/Mistral-12.25B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 13.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
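For readers who want a concrete starting point with one of the single-file quants listed above, here is a minimal, hedged sketch. The repo and file name come from the Q4_K_M row; the choice of `llama-cpp-python` as the runtime, the context size and the prompt are illustrative assumptions, not recommendations from the card.

```python
# Minimal sketch: download one quant and run a prompt with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Mistral-12.25B-Instruct-v0.2-GGUF",
    filename="Mistral-12.25B-Instruct-v0.2.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is an arbitrary example
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF quant is."}]
)
print(result["choices"][0]["message"]["content"])
```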
mradermacher/bagel-dpo-34b-v0.5-i1-GGUF
mradermacher
2024-05-06T05:21:48Z
238
4
transformers
[ "transformers", "gguf", "en", "dataset:ai2_arc", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:camel-ai/biology", "dataset:camel-ai/chemistry", "dataset:camel-ai/math", "dataset:camel-ai/physics", "dataset:jondurbin/contextual-dpo-v0.1", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/py-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:lmsys/lmsys-chat-1m", "dataset:ParisNeo/lollms_aware_dataset", "dataset:TIGER-Lab/MathInstruct", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:kingbri/PIPPA-shareGPT", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:ropes", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:b-mc2/sql-create-context", "dataset:squad_v2", "dataset:mattpscott/airoboros-summarization", "dataset:migtissera/Synthia-v1.3", "dataset:unalignment/toxic-dpo-v0.2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:winogrande", "base_model:jondurbin/bagel-dpo-34b-v0.5", "base_model:quantized:jondurbin/bagel-dpo-34b-v0.5", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-03T00:45:48Z
--- base_model: jondurbin/bagel-dpo-34b-v0.5 datasets: - ai2_arc - allenai/ultrafeedback_binarized_cleaned - argilla/distilabel-intel-orca-dpo-pairs - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - bluemoon-fandom-1-1-rp-cleaned - boolq - camel-ai/biology - camel-ai/chemistry - camel-ai/math - camel-ai/physics - jondurbin/contextual-dpo-v0.1 - jondurbin/gutenberg-dpo-v0.1 - jondurbin/py-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - WizardLM/WizardLM_evol_instruct_70k - glaiveai/glaive-function-calling-v2 - jondurbin/gutenberg-dpo-v0.1 - grimulkan/LimaRP-augmented - lmsys/lmsys-chat-1m - ParisNeo/lollms_aware_dataset - TIGER-Lab/MathInstruct - Muennighoff/natural-instructions - openbookqa - kingbri/PIPPA-shareGPT - piqa - Vezora/Tested-22k-Python-Alpaca - ropes - cakiki/rosetta-code - Open-Orca/SlimOrca - b-mc2/sql-create-context - squad_v2 - mattpscott/airoboros-summarization - migtissera/Synthia-v1.3 - unalignment/toxic-dpo-v0.2 - WhiteRabbitNeo/WRN-Chapter-1 - WhiteRabbitNeo/WRN-Chapter-2 - winogrande language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/jondurbin/bagel-dpo-34b-v0.5 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/bagel-dpo-34b-v0.5-i1-GGUF/resolve/main/bagel-dpo-34b-v0.5.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's 
thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
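As a rough way to act on the size column above, the hypothetical helper below (not from the card) picks the largest listed imatrix quant whose file fits a given budget; note that actual memory use is higher than the file size once the context/KV cache is added, so treat this only as a first cut.

```python
# Hypothetical helper: choose a quant by file size, using a subset of the rows above.
QUANT_SIZES_GB = {
    "i1-IQ2_M": 12.5,
    "i1-Q2_K": 13.5,
    "i1-IQ3_M": 16.2,
    "i1-Q4_K_S": 20.2,
    "i1-Q4_K_M": 21.3,
    "i1-Q5_K_M": 25.0,
    "i1-Q6_K": 28.9,
}

def pick_quant(budget_gb: float) -> str | None:
    """Return the largest quant whose file size is at or under the budget."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(24.0))  # -> "i1-Q4_K_M" (21.3 GB) for a 24 GB budget
```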
mradermacher/Pearl-34B-ties-GGUF
mradermacher
2024-05-06T05:21:42Z
19
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "jondurbin/bagel-dpo-34b-v0.2", "abacusai/MetaMath-Bagel-DPO-34B", "en", "base_model:louisbrulenaudet/Pearl-34B-ties", "base_model:quantized:louisbrulenaudet/Pearl-34B-ties", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T01:27:03Z
--- base_model: louisbrulenaudet/Pearl-34B-ties language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - jondurbin/bagel-dpo-34b-v0.2 - abacusai/MetaMath-Bagel-DPO-34B --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/louisbrulenaudet/Pearl-34B-ties <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pearl-34B-ties-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q2_K.gguf) | Q2_K | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_XS.gguf) | IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_S.gguf) | Q3_K_S | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ3_M.gguf) | IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_M.gguf) | Q3_K_M | 17.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q3_K_L.gguf) | Q3_K_L | 18.8 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.IQ4_XS.gguf) | IQ4_XS | 19.3 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q4_K_M.gguf) | Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q5_K_S.gguf) | Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q5_K_M.gguf) | Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q6_K.gguf) | Q6_K | 28.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Pearl-34B-ties-GGUF/resolve/main/Pearl-34B-ties.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF
mradermacher
2024-05-06T05:21:37Z
18
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T02:30:42Z
--- base_model: LeroyDyer/Mixtral_AI_Cyber_5.0_SFT language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_5.0_SFT <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Cyber_5.0_SFT-GGUF/resolve/main/Mixtral_AI_Cyber_5.0_SFT.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you 
want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Wespeaker/wespeaker-cnceleb-resnet34
Wespeaker
2024-05-06T05:21:36Z
2
1
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-05-06T05:14:50Z
--- license: apache-2.0 ---
mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF
mradermacher
2024-05-06T05:21:23Z
99
2
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "nasiruddin15/Mistral-grok-instract-2-7B-slerp", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "en", "base_model:nasiruddin15/Mistral-dolphin-2.8-grok-instract-2-7B-slerp", "base_model:quantized:nasiruddin15/Mistral-dolphin-2.8-grok-instract-2-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T05:36:56Z
--- base_model: nasiruddin15/Mistral-dolphin-2.8-grok-instract-2-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - nasiruddin15/Mistral-grok-instract-2-7B-slerp - cognitivecomputations/dolphin-2.8-mistral-7b-v02 --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nasiruddin15/Mistral-dolphin-2.8-grok-instract-2-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | 
[GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-dolphin-2.8-grok-instract-2-7B-slerp-GGUF/resolve/main/Mistral-dolphin-2.8-grok-instract-2-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MultiVerse_70B-i1-GGUF
mradermacher
2024-05-06T05:21:09Z
35
0
transformers
[ "transformers", "gguf", "en", "base_model:MTSAIR/MultiVerse_70B", "base_model:quantized:MTSAIR/MultiVerse_70B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-03T08:54:49Z
--- base_model: MTSAIR/MultiVerse_70B language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE license_name: qwen quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/MTSAIR/MultiVerse_70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MultiVerse_70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ1_S.gguf) | i1-IQ1_S | 18.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ1_M.gguf) | i1-IQ1_M | 19.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 23.5 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_S.gguf) | i1-IQ2_S | 25.1 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ2_M.gguf) | i1-IQ2_M | 26.9 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q2_K.gguf) | i1-Q2_K | 28.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 29.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 31.5 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_S.gguf) | i1-IQ3_S | 33.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 33.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ3_M.gguf) | i1-IQ3_M | 34.8 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 36.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 40.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 40.4 | | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_0.gguf) | i1-Q4_0 | 42.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 42.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 45.3 | fast, recommended | | [PART 
1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 52.9 | | | [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF/resolve/main/MultiVerse_70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 60.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
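Another hedged option for split files like the ones above: `snapshot_download` with `allow_patterns` fetches both pieces of one quant type in a single call. The pattern below matches the i1-Q5_K_S parts listed in the table; using `huggingface_hub` this way is a suggestion of this sketch, not an instruction from the card.

```python
# Sketch: fetch only the i1-Q5_K_S part files (assumes `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/MultiVerse_70B-i1-GGUF",
    allow_patterns=["*.i1-Q5_K_S.gguf.part*"],
)
print(local_dir)  # directory with the downloaded parts, ready to be concatenated
```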
mradermacher/Synatra-7B-v0.3-RP-GGUF
mradermacher
2024-05-06T05:21:06Z
17
1
transformers
[ "transformers", "gguf", "ko", "base_model:maywell/Synatra-7B-v0.3-RP", "base_model:quantized:maywell/Synatra-7B-v0.3-RP", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T09:51:38Z
--- base_model: maywell/Synatra-7B-v0.3-RP language: - ko library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maywell/Synatra-7B-v0.3-RP <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Synatra-7B-v0.3-RP-GGUF/resolve/main/Synatra-7B-v0.3-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/pandafish-7b-GGUF
mradermacher
2024-05-06T05:20:56Z
9
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:ichigoberry/pandafish-7b", "base_model:quantized:ichigoberry/pandafish-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T12:13:43Z
--- base_model: ichigoberry/pandafish-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-7b-GGUF/resolve/main/pandafish-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Eurus-7b-sft-GGUF
mradermacher
2024-05-06T05:20:27Z
130
0
transformers
[ "transformers", "gguf", "reasoning", "en", "dataset:openbmb/UltraInteract", "dataset:stingning/ultrachat", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:Open-Orca/OpenOrca", "base_model:pharaouk/Eurus-7b-sft", "base_model:quantized:pharaouk/Eurus-7b-sft", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T16:05:16Z
--- base_model: pharaouk/Eurus-7b-sft datasets: - openbmb/UltraInteract - stingning/ultrachat - openchat/openchat_sharegpt4_dataset - Open-Orca/OpenOrca language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - reasoning --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/pharaouk/Eurus-7b-sft <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Eurus-7b-sft-GGUF/resolve/main/Eurus-7b-sft.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/HeatherSpell-7b-GGUF
mradermacher
2024-05-06T05:20:25Z
34
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "yam-peleg/Experiment26-7B", "Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5", "en", "base_model:MysticFoxMagic/HeatherSpell-7b", "base_model:quantized:MysticFoxMagic/HeatherSpell-7b", "endpoints_compatible", "region:us" ]
null
2024-04-03T16:44:58Z
--- base_model: MysticFoxMagic/HeatherSpell-7b language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - yam-peleg/Experiment26-7B - Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5 --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MysticFoxMagic/HeatherSpell-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HeatherSpell-7b-GGUF/resolve/main/HeatherSpell-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/KittyNyanster-v1-GGUF
mradermacher
2024-05-06T05:20:17Z
191
2
transformers
[ "transformers", "gguf", "roleplay", "chat", "mistral", "en", "base_model:arlineka/KittyNyanster-v1", "base_model:quantized:arlineka/KittyNyanster-v1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T18:30:14Z
--- base_model: arlineka/KittyNyanster-v1 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - roleplay - chat - mistral --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/arlineka/KittyNyanster-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KittyNyanster-v1-GGUF/resolve/main/KittyNyanster-v1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/mistral-7b-medical-assistance-GGUF
mradermacher
2024-05-06T05:20:12Z
17
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:Hdhsjfjdsj/mistral-7b-medical-assistance", "base_model:quantized:Hdhsjfjdsj/mistral-7b-medical-assistance", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T18:55:27Z
--- base_model: Hdhsjfjdsj/mistral-7b-medical-assistance language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Hdhsjfjdsj/mistral-7b-medical-assistance <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mistral-7b-medical-assistance-GGUF/resolve/main/mistral-7b-medical-assistance.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/pandafish-dt-7b-GGUF
mradermacher
2024-05-06T05:20:10Z
64
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "CultriX/MergeCeption-7B-v3", "en", "base_model:ichigoberry/pandafish-dt-7b", "base_model:quantized:ichigoberry/pandafish-dt-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T19:03:45Z
--- base_model: ichigoberry/pandafish-dt-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - CultriX/MergeCeption-7B-v3 --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-dt-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF/resolve/main/pandafish-dt-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/KunoichiVerse-7B-GGUF
mradermacher
2024-05-06T05:19:51Z
28
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Ppoyaa/KunoichiVerse-7B", "base_model:quantized:Ppoyaa/KunoichiVerse-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T21:48:59Z
--- base_model: Ppoyaa/KunoichiVerse-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Ppoyaa/KunoichiVerse-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/KunoichiVerse-7B-GGUF/resolve/main/KunoichiVerse-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/bagel-20b-v04-GGUF
mradermacher
2024-05-06T05:19:46Z
65
1
transformers
[ "transformers", "gguf", "en", "dataset:ai2_arc", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:camel-ai/biology", "dataset:camel-ai/chemistry", "dataset:camel-ai/math", "dataset:camel-ai/physics", "dataset:jondurbin/contextual-dpo-v0.1", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/py-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:lmsys/lmsys-chat-1m", "dataset:ParisNeo/lollms_aware_dataset", "dataset:TIGER-Lab/MathInstruct", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:kingbri/PIPPA-shareGPT", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:ropes", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:b-mc2/sql-create-context", "dataset:squad_v2", "dataset:mattpscott/airoboros-summarization", "dataset:migtissera/Synthia-v1.3", "dataset:unalignment/toxic-dpo-v0.2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:winogrande", "base_model:jondurbin/bagel-20b-v04", "base_model:quantized:jondurbin/bagel-20b-v04", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-03T22:08:35Z
--- base_model: jondurbin/bagel-20b-v04 datasets: - ai2_arc - allenai/ultrafeedback_binarized_cleaned - argilla/distilabel-intel-orca-dpo-pairs - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - bluemoon-fandom-1-1-rp-cleaned - boolq - camel-ai/biology - camel-ai/chemistry - camel-ai/math - camel-ai/physics - jondurbin/contextual-dpo-v0.1 - jondurbin/gutenberg-dpo-v0.1 - jondurbin/py-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - WizardLM/WizardLM_evol_instruct_70k - glaiveai/glaive-function-calling-v2 - jondurbin/gutenberg-dpo-v0.1 - grimulkan/LimaRP-augmented - lmsys/lmsys-chat-1m - ParisNeo/lollms_aware_dataset - TIGER-Lab/MathInstruct - Muennighoff/natural-instructions - openbookqa - kingbri/PIPPA-shareGPT - piqa - Vezora/Tested-22k-Python-Alpaca - ropes - cakiki/rosetta-code - Open-Orca/SlimOrca - b-mc2/sql-create-context - squad_v2 - mattpscott/airoboros-summarization - migtissera/Synthia-v1.3 - unalignment/toxic-dpo-v0.2 - WhiteRabbitNeo/WRN-Chapter-1 - WhiteRabbitNeo/WRN-Chapter-2 - winogrande language: - en library_name: transformers license: other license_link: https://huggingface.co/internlm/internlm2-20b#open-source-license license_name: internlm2-20b quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jondurbin/bagel-20b-v04 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q2_K.gguf) | Q2_K | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_XS.gguf) | IQ3_XS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_S.gguf) | Q3_K_S | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ3_M.gguf) | IQ3_M | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_M.gguf) | Q3_K_M | 10.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q3_K_L.gguf) | Q3_K_L | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.IQ4_XS.gguf) | IQ4_XS | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_S.gguf) | Q4_K_S | 12.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q4_K_M.gguf) | Q4_K_M | 12.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_S.gguf) | Q5_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q5_K_M.gguf) | Q5_K_M | 14.8 | | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q6_K.gguf) | Q6_K | 17.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bagel-20b-v04-GGUF/resolve/main/bagel-20b-v04.Q8_0.gguf) | Q8_0 | 21.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Wespeaker/wespeaker-cnceleb-resnet34-LM
Wespeaker
2024-05-06T05:19:40Z
4
3
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-05-06T05:15:14Z
--- license: apache-2.0 ---
mradermacher/StarMonarch-7B-GGUF
mradermacher
2024-05-06T05:19:34Z
71
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:Ppoyaa/StarMonarch-7B", "base_model:quantized:Ppoyaa/StarMonarch-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T00:13:47Z
--- base_model: Ppoyaa/StarMonarch-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Ppoyaa/StarMonarch-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StarMonarch-7B-GGUF/resolve/main/StarMonarch-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Qwen1.5-7B-Translator-GGUF
mradermacher
2024-05-06T05:19:32Z
5
0
transformers
[ "transformers", "gguf", "en", "base_model:DeyangKong/Qwen1.5-7B-Translator", "base_model:quantized:DeyangKong/Qwen1.5-7B-Translator", "license:gpl-3.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T00:24:50Z
--- base_model: DeyangKong/Qwen1.5-7B-Translator language: - en library_name: transformers license: gpl-3.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/DeyangKong/Qwen1.5-7B-Translator <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.IQ3_XS.gguf) | IQ3_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.IQ3_M.gguf) | IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q3_K_M.gguf) | Q3_K_M | 4.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q3_K_L.gguf) | Q3_K_L | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.IQ4_XS.gguf) | IQ4_XS | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q4_K_S.gguf) | Q4_K_S | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q4_K_M.gguf) | Q4_K_M | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q6_K.gguf) | Q6_K | 7.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-Translator-GGUF/resolve/main/Qwen1.5-7B-Translator.Q8_0.gguf) | Q8_0 | 8.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Athena-v4-i1-GGUF
mradermacher
2024-05-06T05:19:17Z
117
1
transformers
[ "transformers", "gguf", "en", "base_model:IkariDev/Athena-v4", "base_model:quantized:IkariDev/Athena-v4", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T01:14:19Z
--- base_model: IkariDev/Athena-v4 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/IkariDev/Athena-v4 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Athena-v4-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ1_S.gguf) | i1-IQ1_S | 3.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ1_M.gguf) | i1-IQ1_M | 3.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ2_S.gguf) | i1-IQ2_S | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ2_M.gguf) | i1-IQ2_M | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q2_K.gguf) | i1-Q2_K | 5.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ3_S.gguf) | i1-IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q4_0.gguf) | i1-Q4_0 | 7.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.5 | | | 
[GGUF](https://huggingface.co/mradermacher/Athena-v4-i1-GGUF/resolve/main/Athena-v4.i1-Q6_K.gguf) | i1-Q6_K | 11.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
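The card above holds the weighted/imatrix (`i1-`) quants, while the static ones sit in the separate `Athena-v4-GGUF` repository it links to. To see exactly which files each repo offers before downloading, a short sketch using the Hub API is shown below; `HfApi.list_repo_files` is a standard `huggingface_hub` call, and printing both repos side by side is only for illustration.

```python
# Sketch: list the GGUF files offered by the static and imatrix repos,
# so a quant type can be chosen before downloading anything.
from huggingface_hub import HfApi

api = HfApi()
for repo_id in ("mradermacher/Athena-v4-GGUF", "mradermacher/Athena-v4-i1-GGUF"):
    gguf_files = [f for f in api.list_repo_files(repo_id) if f.endswith(".gguf")]
    print(repo_id)
    for name in sorted(gguf_files):
        # Files from the imatrix repo carry an "i1-" prefix in the quant name.
        print("  ", name)
```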
mradermacher/AuroraRP-8x7B-GGUF
mradermacher
2024-05-06T05:18:57Z
26
1
transformers
[ "transformers", "gguf", "roleplay", "rp", "mergekit", "merge", "en", "endpoints_compatible", "region:us" ]
null
2024-04-04T04:00:24Z
--- base_model: Fredithefish/AuroraRP-8x7B language: - en library_name: transformers quantized_by: mradermacher tags: - roleplay - rp - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Fredithefish/AuroraRP-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/AuroraRP-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AuroraRP-8x7B-GGUF/resolve/main/AuroraRP-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
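The Q8_0 row in the card above is split into `.part1of2`/`.part2of2` files, which the Usage note says must be concatenated. Below is a small Python sketch of that step; it assumes the parts are plain byte-splits (as these repos distribute them) and that both files are already in the working directory.

```python
# Sketch: reassemble a split GGUF by plain byte concatenation.
# Assumes both parts were already downloaded (e.g. via huggingface_hub or a browser).
import shutil

parts = [
    "AuroraRP-8x7B.Q8_0.gguf.part1of2",
    "AuroraRP-8x7B.Q8_0.gguf.part2of2",
]

with open("AuroraRP-8x7B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each part so the ~50 GB total is never held in memory.
            shutil.copyfileobj(src, out)
```

On a Unix shell the same result comes from `cat`-ing the parts, in order, into one output file.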
mradermacher/MeliodasT3qm7-7B-GGUF
mradermacher
2024-05-06T05:18:47Z
11
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger", "en", "base_model:automerger/MeliodasT3qm7-7B", "base_model:quantized:automerger/MeliodasT3qm7-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T05:21:38Z
--- base_model: automerger/MeliodasT3qm7-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/automerger/MeliodasT3qm7-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7-7B-GGUF/resolve/main/MeliodasT3qm7-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/IsenHumourAI-GGUF
mradermacher
2024-05-06T05:18:38Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:jberni29/IsenHumourAI", "base_model:quantized:jberni29/IsenHumourAI", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T06:46:50Z
--- base_model: jberni29/IsenHumourAI language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jberni29/IsenHumourAI <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/IsenHumourAI-GGUF/resolve/main/IsenHumourAI.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/MistarlingMaid-2x7B-base-GGUF
mradermacher
2024-05-06T05:18:27Z
74
0
transformers
[ "transformers", "gguf", "en", "base_model:dawn17/MistarlingMaid-2x7B-base", "base_model:quantized:dawn17/MistarlingMaid-2x7B-base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T08:20:09Z
--- base_model: dawn17/MistarlingMaid-2x7B-base language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/dawn17/MistarlingMaid-2x7B-base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ3_M.gguf) | IQ3_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MistarlingMaid-2x7B-base-GGUF/resolve/main/MistarlingMaid-2x7B-base.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/UNAversal-8x7B-v1beta-i1-GGUF
mradermacher
2024-05-06T05:18:09Z
61
1
transformers
[ "transformers", "gguf", "UNA", "juanako", "mixtral", "MoE", "en", "base_model:fblgit/UNAversal-8x7B-v1beta", "base_model:quantized:fblgit/UNAversal-8x7B-v1beta", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T09:42:55Z
--- base_model: fblgit/UNAversal-8x7B-v1beta language: - en library_name: transformers license: cc-by-nc-sa-4.0 quantized_by: mradermacher tags: - UNA - juanako - mixtral - MoE --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/fblgit/UNAversal-8x7B-v1beta <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ1_M.gguf) | i1-IQ1_M | 11.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_0.gguf) | i1-Q4_0 | 26.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal 
size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/UNAversal-8x7B-v1beta-i1-GGUF/resolve/main/UNAversal-8x7B-v1beta.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF
mradermacher
2024-05-06T05:17:59Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ddh0/Mistral-10.7B-Instruct-v0.2", "base_model:quantized:ddh0/Mistral-10.7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-04T12:08:09Z
--- base_model: ddh0/Mistral-10.7B-Instruct-v0.2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ddh0/Mistral-10.7B-Instruct-v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q6_K.gguf) | Q6_K | 9.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF/resolve/main/Mistral-10.7B-Instruct-v0.2.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
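The Usage section of the card above points to TheBloke's READMEs for working with GGUF files. As a complement (not part of the original card), here is a minimal Python sketch that fetches a single quant from this repository with the official `huggingface_hub` client; the repo id and filename are copied from the Q4_K_M row of the table above.

```python
# Minimal sketch: download one quant file from the repo listed above.
# Repo id and filename come from the card's Q4_K_M table row.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mistral-10.7B-Instruct-v0.2-GGUF",
    filename="Mistral-10.7B-Instruct-v0.2.Q4_K_M.gguf",
)
print(path)  # local path inside the Hugging Face cache
```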
mradermacher/MermaidMixtral-2x6.5b-GGUF
mradermacher
2024-05-06T05:17:28Z
90
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/MermaidMixtral-2x6.5b", "base_model:quantized:TroyDoesAI/MermaidMixtral-2x6.5b", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T15:36:55Z
--- base_model: TroyDoesAI/MermaidMixtral-2x6.5b language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/MermaidMixtral-2x6.5b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q2_K.gguf) | Q2_K | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.IQ3_XS.gguf) | IQ3_XS | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q3_K_S.gguf) | Q3_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.IQ3_S.gguf) | IQ3_S | 5.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.IQ3_M.gguf) | IQ3_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.IQ4_XS.gguf) | IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q4_K_S.gguf) | Q4_K_S | 7.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q4_K_M.gguf) | Q4_K_M | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-2x6.5b-GGUF/resolve/main/MermaidMixtral-2x6.5b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Twizzler-7B-GGUF
mradermacher
2024-05-06T05:17:26Z
69
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:son-of-man/Twizzler-7B", "base_model:quantized:son-of-man/Twizzler-7B", "endpoints_compatible", "region:us" ]
null
2024-04-04T15:44:57Z
--- base_model: son-of-man/Twizzler-7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/son-of-man/Twizzler-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Twizzler-7B-GGUF/resolve/main/Twizzler-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
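Once a quant such as the Q4_K_S file from the card above has been downloaded, it can be run with any GGUF-capable runtime. The sketch below assumes the `llama-cpp-python` bindings (installed separately, e.g. `pip install llama-cpp-python`); it is only an illustration, not the card's own instructions, and llama.cpp's CLI or other runtimes work just as well.

```python
# Illustrative only: run a downloaded GGUF quant with llama-cpp-python.
# The model path matches the Q4_K_S filename from the table above.
from llama_cpp import Llama

llm = Llama(model_path="Twizzler-7B.Q4_K_S.gguf", n_ctx=4096)
out = llm("Write one sentence about quantized language models.", max_tokens=64)
print(out["choices"][0]["text"])
```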
mradermacher/Mermaid_11.5B-GGUF
mradermacher
2024-05-06T05:17:13Z
19
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid_11.5B", "base_model:quantized:TroyDoesAI/Mermaid_11.5B", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T19:16:21Z
--- base_model: TroyDoesAI/Mermaid_11.5B language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid_11.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q2_K.gguf) | Q2_K | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_XS.gguf) | IQ3_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_S.gguf) | Q3_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_S.gguf) | IQ3_S | 5.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_M.gguf) | IQ3_M | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ4_XS.gguf) | IQ4_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_S.gguf) | Q4_K_S | 7.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_S.gguf) | Q5_K_S | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_M.gguf) | Q5_K_M | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q6_K.gguf) | Q6_K | 9.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q8_0.gguf) | Q8_0 | 12.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
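Because the quant tables in these cards are sorted by size, a simple selection strategy is to take the largest file that still fits your memory budget, leaving some headroom for the context/KV cache. The helper below is only an illustrative rule of thumb: the sizes are copied from the Mermaid_11.5B table above, while the 20% headroom figure is an assumption, not a recommendation from the card.

```python
# Illustrative helper: pick the largest quant that fits a memory budget.
# Sizes (GB) are copied from the Mermaid_11.5B table above.
QUANTS = [("Q2_K", 4.8), ("IQ3_XS", 5.3), ("Q3_K_S", 5.5), ("IQ3_S", 5.5),
          ("IQ3_M", 5.7), ("Q3_K_M", 6.1), ("Q3_K_L", 6.6), ("IQ4_XS", 6.7),
          ("Q4_K_S", 7.1), ("Q4_K_M", 7.4), ("Q5_K_S", 8.4), ("Q5_K_M", 8.6),
          ("Q6_K", 9.9), ("Q8_0", 12.7)]

def pick_quant(budget_gb: float, headroom: float = 0.2) -> str:
    # Reserve a fraction of memory for context/KV cache (assumed 20%).
    usable = budget_gb * (1.0 - headroom)
    fitting = [name for name, size in QUANTS if size <= usable]
    return fitting[-1] if fitting else "nothing fits; offload or pick a smaller model"

print(pick_quant(12.0))  # e.g. a 12 GB GPU -> "Q5_K_M"
```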
mradermacher/Swallow-70b-RP-GGUF
mradermacher
2024-05-06T05:17:10Z
72
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:nitky/Swallow-70b-RP", "base_model:quantized:nitky/Swallow-70b-RP", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-04T20:51:26Z
--- base_model: nitky/Swallow-70b-RP language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nitky/Swallow-70b-RP <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q2_K.gguf) | Q2_K | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.IQ3_XS.gguf) | IQ3_XS | 28.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.IQ3_S.gguf) | IQ3_S | 30.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q3_K_S.gguf) | Q3_K_S | 30.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.IQ3_M.gguf) | IQ3_M | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q3_K_M.gguf) | Q3_K_M | 33.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q3_K_L.gguf) | Q3_K_L | 36.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.IQ4_XS.gguf) | IQ4_XS | 37.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q4_K_S.gguf) | Q4_K_S | 39.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q4_K_M.gguf) | Q4_K_M | 41.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q5_K_S.gguf) | Q5_K_S | 47.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q5_K_M.gguf) | Q5_K_M | 49.0 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q6_K.gguf.part2of2) | Q6_K | 56.8 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
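The Q6_K and Q8_0 quants in the card above are split into `.part1of2`/`.part2of2` files, and the Usage section refers to TheBloke's READMEs for how to concatenate them. The sketch below does the same thing in plain Python (an illustration, not the card's own instructions): it simply appends the parts, in order, into one `.gguf` file.

```python
# Illustrative: join split GGUF parts into a single file by byte-wise
# concatenation, in part order. Filenames match the Q6_K rows above.
import shutil

parts = [
    "Swallow-70b-RP.Q6_K.gguf.part1of2",
    "Swallow-70b-RP.Q6_K.gguf.part2of2",
]
with open("Swallow-70b-RP.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy, low memory use
```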
mradermacher/Alpacino13b-GGUF
mradermacher
2024-05-06T05:17:07Z
31
0
transformers
[ "transformers", "gguf", "alpaca", "en", "base_model:digitous/Alpacino13b", "base_model:quantized:digitous/Alpacino13b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-04T20:54:42Z
--- base_model: digitous/Alpacino13b language: - en library_name: transformers license: other quantized_by: mradermacher tags: - alpaca --- ## About <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/Alpacino13b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF
mradermacher
2024-05-06T05:16:57Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic", "base_model:quantized:TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-04T22:43:02Z
--- base_model: TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q6_K.gguf) | Q6_K | 7.3 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic-GGUF/resolve/main/Mermaid_Yi-9B_Factual_Temps_Full_Synthetic.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF
mradermacher
2024-05-06T05:16:52Z
37
0
transformers
[ "transformers", "gguf", "merge", "en", "base_model:brucethemoose/Capybara-Tess-Yi-34B-200K", "base_model:quantized:brucethemoose/Capybara-Tess-Yi-34B-200K", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-05T00:09:20Z
--- base_model: brucethemoose/Capybara-Tess-Yi-34B-200K language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | 
[GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
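A quick way to sanity-check the sizes in tables like the one above is to convert a file size into approximate bits per weight. The arithmetic below uses the i1-Q4_K_M row and a nominal ~34.4B parameter count for the Yi-34B base model (an assumption); it is only an approximation, since the GGUF file also carries metadata and keeps some tensors at higher precision.

```python
# Rough bits-per-weight estimate from a quant's file size (approximation only;
# GGUF files also contain metadata and a few higher-precision tensors).
size_gb = 20.8    # i1-Q4_K_M size from the table above
params_b = 34.4   # nominal Yi-34B parameter count (assumption)

bits_per_weight = size_gb * 8 / params_b  # GB * 8 bits over billions of params
print(f"{bits_per_weight:.2f} bits/weight")  # ~4.8, close to Q4_K_M's nominal value
```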
mradermacher/NeuralNinja-2x-7B-GGUF
mradermacher
2024-05-06T05:16:45Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Muhammad2003/NeuralNinja-2x-7B", "base_model:quantized:Muhammad2003/NeuralNinja-2x-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T00:55:22Z
--- base_model: Muhammad2003/NeuralNinja-2x-7B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Muhammad2003/NeuralNinja-2x-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralNinja-2x-7B-GGUF/resolve/main/NeuralNinja-2x-7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/hydra-moe-120b-GGUF
mradermacher
2024-05-06T05:16:38Z
19
0
transformers
[ "transformers", "gguf", "moe", "moerge", "en", "base_model:ibivibiv/hydra-moe-120b", "base_model:quantized:ibivibiv/hydra-moe-120b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T01:49:53Z
--- base_model: ibivibiv/hydra-moe-120b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - moerge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ibivibiv/hydra-moe-120b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/hydra-moe-120b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q2_K.gguf) | Q2_K | 41.6 | | | [GGUF](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ3_XS.gguf) | IQ3_XS | 46.5 | | | [GGUF](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q3_K_S.gguf) | Q3_K_S | 49.1 | | | [GGUF](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ3_S.gguf) | IQ3_S | 49.2 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ3_M.gguf.part2of2) | IQ3_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q3_K_M.gguf.part2of2) | Q3_K_M | 54.5 | lower quality | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q3_K_L.gguf.part2of2) | Q3_K_L | 59.1 | | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.IQ4_XS.gguf.part2of2) | IQ4_XS | 61.3 | | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q4_K_S.gguf.part2of2) | Q4_K_S | 64.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q4_K_M.gguf.part2of2) | Q4_K_M | 68.8 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q5_K_S.gguf.part2of2) | Q5_K_S | 78.3 | | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q5_K_M.gguf.part2of2) | Q5_K_M | 80.7 | | | [PART 
1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q6_K.gguf.part2of2) | Q6_K | 93.3 | very good quality | | [PART 1](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/hydra-moe-120b-GGUF/resolve/main/hydra-moe-120b.Q8_0.gguf.part3of3) | Q8_0 | 120.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
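For repositories like this one, where the larger quants are split into several `.part*` files, it can be convenient to fetch every piece of one quant in a single call. The sketch below (not from the original card) uses `huggingface_hub`'s `snapshot_download` with a filename pattern; the pattern shown matches the two Q4_K_S parts listed above.

```python
# Illustrative: download every part of one split quant with a glob pattern.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/hydra-moe-120b-GGUF",
    allow_patterns=["*.Q4_K_S.gguf.part*"],  # matches part1of2 and part2of2
)
print(local_dir)  # directory containing the downloaded parts
```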
mradermacher/WizardLM-30B-V1.0-GGUF
mradermacher
2024-05-06T05:16:35Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:WizardLM/WizardLM-30B-V1.0", "base_model:quantized:WizardLM/WizardLM-30B-V1.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T01:57:20Z
--- base_model: WizardLM/WizardLM-30B-V1.0 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WizardLM/WizardLM-30B-V1.0 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/WizardLM-30B-V1.0-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-V1.0-GGUF/resolve/main/WizardLM-30B-V1.0.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
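When scripting over repositories such as the one below, it can help to list which quant files are actually present before downloading anything. The snippet is an illustrative sketch using `huggingface_hub`'s repo-listing API on the WizardLM-30B-V1.0-GGUF repository named in the next card.

```python
# Illustrative: list the GGUF quant files present in a repository.
from huggingface_hub import HfApi

files = HfApi().list_repo_files("mradermacher/WizardLM-30B-V1.0-GGUF")
for f in sorted(files):
    if f.endswith(".gguf"):
        print(f)  # e.g. WizardLM-30B-V1.0.Q4_K_M.gguf
```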
mradermacher/Irene-RP-v5-7B-GGUF
mradermacher
2024-05-06T05:16:24Z
1
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "roleplay", "en", "base_model:Virt-io/Irene-RP-v5-7B", "base_model:quantized:Virt-io/Irene-RP-v5-7B", "endpoints_compatible", "region:us" ]
null
2024-04-05T02:29:13Z
--- base_model: Virt-io/Irene-RP-v5-7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - mistral - roleplay --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Virt-io/Irene-RP-v5-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Irene-RP-v5-7B-GGUF/resolve/main/Irene-RP-v5-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Aguie-chat_v0.1-GGUF
mradermacher
2024-05-06T05:16:21Z
91
0
transformers
[ "transformers", "gguf", "ko", "en", "base_model:Heoni/Aguie-chat_v0.1", "base_model:quantized:Heoni/Aguie-chat_v0.1", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T03:21:10Z
--- base_model: Heoni/Aguie-chat_v0.1 language: - ko - en library_name: transformers license: cc-by-nc-nd-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Heoni/Aguie-chat_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aguie-chat_v0.1-GGUF/resolve/main/Aguie-chat_v0.1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mistral-7b-V0.3-ReAct-GGUF
mradermacher
2024-05-06T05:16:16Z
51
2
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T03:43:30Z
--- base_model: Maverick17/Mistral-7b-V0.3-ReAct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Maverick17/Mistral-7b-V0.3-ReAct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7b-V0.3-ReAct-GGUF/resolve/main/Mistral-7b-V0.3-ReAct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other 
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Aguie_v0.1-GGUF
mradermacher
2024-05-06T05:16:03Z
36
0
transformers
[ "transformers", "gguf", "ko", "en", "base_model:Heoni/Aguie_v0.1", "base_model:quantized:Heoni/Aguie_v0.1", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T05:01:17Z
--- base_model: Heoni/Aguie_v0.1 language: - ko - en library_name: transformers license: cc-by-nc-nd-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Heoni/Aguie_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TableLLM-7b-GGUF
mradermacher
2024-05-06T05:16:01Z
146
0
transformers
[ "transformers", "gguf", "Table", "QA", "Code", "en", "dataset:RUCKBReasoning/TableLLM-SFT", "base_model:RUCKBReasoning/TableLLM-7b", "base_model:quantized:RUCKBReasoning/TableLLM-7b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T05:06:18Z
--- base_model: RUCKBReasoning/TableLLM-7b datasets: - RUCKBReasoning/TableLLM-SFT language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - Table - QA - Code --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/RUCKBReasoning/TableLLM-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
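The quant table above is sorted by size, with the notes column flagging the recommended trade-offs; in practice the first question is usually which file fits the memory you have. The helper below is not part of the original card: it is a small self-contained sketch that picks the largest listed TableLLM-7b quant under a given budget, using the Size/GB figures copied from the table, and the ~1 GB allowance for context and runtime buffers is my own rough assumption.

```python
# Illustrative helper: pick the largest TableLLM-7b quant that fits a memory budget.
# Sizes (GB) are copied from the "Provided Quants" table above; the overhead figure
# is a rough assumption for KV cache and runtime buffers, not a measured value.
SIZES_GB = {
    "Q2_K": 2.6, "IQ3_XS": 2.9, "IQ3_S": 3.0, "Q3_K_S": 3.0, "IQ3_M": 3.2,
    "Q3_K_M": 3.4, "Q3_K_L": 3.7, "IQ4_XS": 3.7, "Q4_K_S": 4.0, "Q4_K_M": 4.2,
    "Q5_K_S": 4.8, "Q5_K_M": 4.9, "Q6_K": 5.6, "Q8_0": 7.3,
}

def pick_quant(budget_gb: float, overhead_gb: float = 1.0) -> str | None:
    """Return the largest quant whose file size plus overhead fits the budget."""
    fitting = {q: s for q, s in SIZES_GB.items() if s + overhead_gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(6.0))   # ~6 GB free -> Q5_K_M (4.9 GB file)
print(pick_quant(16.0))  # plenty of room -> Q8_0
```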
mradermacher/bophades-v2-mistral-7B-GGUF
mradermacher
2024-05-06T05:15:51Z
40
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/bophades-v2-mistral-7B", "base_model:quantized:nbeerbower/bophades-v2-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T06:26:40Z
--- base_model: nbeerbower/bophades-v2-mistral-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/bophades-v2-mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bophades-v2-mistral-7B-GGUF/resolve/main/bophades-v2-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/TableLLM-13b-GGUF
mradermacher
2024-05-06T05:15:43Z
129
0
transformers
[ "transformers", "gguf", "Table", "QA", "Code", "en", "dataset:RUCKBReasoning/TableLLM-SFT", "base_model:RUCKBReasoning/TableLLM-13b", "base_model:quantized:RUCKBReasoning/TableLLM-13b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T06:37:28Z
--- base_model: RUCKBReasoning/TableLLM-13b datasets: - RUCKBReasoning/TableLLM-SFT language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - Table - QA - Code --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/RUCKBReasoning/TableLLM-13b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TableLLM-13b-GGUF/resolve/main/TableLLM-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-7b-hf-CodeSkill-GGUF
mradermacher
2024-05-06T05:15:31Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:HachiML/Swallow-7b-hf-CodeSkill", "base_model:quantized:HachiML/Swallow-7b-hf-CodeSkill", "endpoints_compatible", "region:us" ]
null
2024-04-05T08:12:09Z
--- base_model: HachiML/Swallow-7b-hf-CodeSkill language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/HachiML/Swallow-7b-hf-CodeSkill <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q2_K.gguf) | Q2_K | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.IQ3_XS.gguf) | IQ3_XS | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.IQ3_S.gguf) | IQ3_S | 3.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.IQ3_M.gguf) | IQ3_M | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q3_K_L.gguf) | Q3_K_L | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.IQ4_XS.gguf) | IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q6_K.gguf) | Q6_K | 5.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-7b-hf-CodeSkill-GGUF/resolve/main/Swallow-7b-hf-CodeSkill.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF
mradermacher
2024-05-06T05:15:23Z
2
0
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T09:02:52Z
--- base_model: ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Myrrh_solar_10.7b_v0.1-GGUF
mradermacher
2024-05-06T05:14:59Z
0
0
transformers
[ "transformers", "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T11:43:30Z
--- base_model: ParkTaeEon/Myrrh_solar_10.7b_v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ParkTaeEon/Myrrh_solar_10.7b_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF
mradermacher
2024-05-06T05:14:57Z
4
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "ja", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-05T12:00:20Z
--- base_model: Aratako/Superkarakuri-lm-chat-70b-v0.1 language: - ja library_name: transformers license: llama2 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Aratako/Superkarakuri-lm-chat-70b-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q2_K.gguf) | Q2_K | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_XS.gguf) | IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_S.gguf) | IQ3_S | 30.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_S.gguf) | Q3_K_S | 30.2 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ3_M.gguf) | IQ3_M | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_M.gguf) | Q3_K_M | 33.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q3_K_L.gguf) | Q3_K_L | 36.4 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.IQ4_XS.gguf) | IQ4_XS | 37.4 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q4_K_S.gguf) | Q4_K_S | 39.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q4_K_M.gguf) | Q4_K_M | 41.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q5_K_S.gguf) | Q5_K_S | 47.7 | | | [GGUF](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q5_K_M.gguf) | Q5_K_M | 49.0 | | | [PART 1](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part2of2) | Q6_K | 56.9 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q8_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF/resolve/main/Superkarakuri-lm-chat-70b-v0.1.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
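The Q6_K and Q8_0 entries above are split into .part1of2/.part2of2 files, and the Usage note points to TheBloke's READMEs for how to concatenate multi-part files. The sketch below is only an illustration: it assumes the parts are a plain byte-level split that can be joined in order (my reading of the .partXofY naming, not something stated in the card), downloads both parts with `huggingface_hub`, and concatenates them locally; check the linked instructions before relying on it.

```python
# Hedged sketch: download both parts of the split Q6_K file and byte-concatenate them.
# Assumption: .partXofY files are a plain byte split that can simply be joined in order
# (see the usage note above and TheBloke's READMEs for the authoritative instructions).
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Superkarakuri-lm-chat-70b-v0.1-GGUF"
parts = [
    "Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part1of2",
    "Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf.part2of2",
]

local_parts = [hf_hub_download(repo_id=repo, filename=p) for p in parts]

# Join the parts in order into a single .gguf file.
with open("Superkarakuri-lm-chat-70b-v0.1.Q6_K.gguf", "wb") as out:
    for p in local_parts:
        with open(p, "rb") as src:
            shutil.copyfileobj(src, out)
```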
mradermacher/pandafish-2-7b-32k-GGUF
mradermacher
2024-05-06T05:14:48Z
16
5
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "en", "base_model:ichigoberry/pandafish-2-7b-32k", "base_model:quantized:ichigoberry/pandafish-2-7b-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:05:35Z
--- base_model: ichigoberry/pandafish-2-7b-32k language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - cognitivecomputations/dolphin-2.8-mistral-7b-v02 --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-2-7b-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF/resolve/main/pandafish-2-7b-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF
mradermacher
2024-05-06T05:14:46Z
107
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Undi95/Mistral-ClaudeLimaRP-v3-7B", "SanjiWatsuki/Silicon-Maid-7B", "en", "base_model:akrads/ClaudeLimaRP-Maid-10.7B", "base_model:quantized:akrads/ClaudeLimaRP-Maid-10.7B", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:16:23Z
--- base_model: akrads/ClaudeLimaRP-Maid-10.7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Undi95/Mistral-ClaudeLimaRP-v3-7B - SanjiWatsuki/Silicon-Maid-7B --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/akrads/ClaudeLimaRP-Maid-10.7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ClaudeLimaRP-Maid-10.7B-GGUF/resolve/main/ClaudeLimaRP-Maid-10.7B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-70b-NVE-RP-i1-GGUF
mradermacher
2024-05-06T05:14:43Z
99
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:nitky/Swallow-70b-NVE-RP", "base_model:quantized:nitky/Swallow-70b-NVE-RP", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T14:20:52Z
--- base_model: nitky/Swallow-70b-NVE-RP language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/nitky/Swallow-70b-NVE-RP <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-RP-i1-GGUF/resolve/main/Swallow-70b-NVE-RP.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
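The imatrix table above runs from i1-IQ1_S at 14.6 GB ("for the desperate") up to i1-Q6_K at 56.7 GB, and the file sizes themselves give a feel for how aggressive each quant is. The short sketch below turns a few of those Size/GB figures into approximate bits per weight; the parameter count of roughly 69 billion (a Llama-2-70B-class base) is my assumption, and metadata inside the files is ignored, so treat the results as ballpark only.

```python
# Back-of-the-envelope bits-per-weight from the Size/GB column above.
# Assumption: ~69e9 parameters for a Llama-2-70B-class model; file metadata ignored.
PARAMS = 69e9

sizes_gb = {"i1-IQ1_S": 14.6, "i1-IQ2_M": 23.3, "i1-Q4_K_S": 39.3, "i1-Q6_K": 56.7}

for quant, gb in sizes_gb.items():
    bits_per_weight = gb * 1e9 * 8 / PARAMS
    print(f"{quant}: ~{bits_per_weight:.2f} bits/weight")
# Roughly 1.7, 2.7, 4.6 and 6.6 bits/weight respectively -- which is why the notes
# call IQ1_S "for the desperate" and Q4_K_S the size/speed/quality sweet spot.
```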
mradermacher/anarchy-solar-10B-v1-GGUF
mradermacher
2024-05-06T05:14:37Z
0
0
transformers
[ "transformers", "gguf", "ko", "base_model:moondriller/anarchy-solar-10B-v1", "base_model:quantized:moondriller/anarchy-solar-10B-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T15:34:15Z
--- base_model: moondriller/anarchy-solar-10B-v1 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/moondriller/anarchy-solar-10B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-solar-10B-v1-GGUF/resolve/main/anarchy-solar-10B-v1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/anarchy-llama2-13B-v2-GGUF
mradermacher
2024-05-06T05:14:30Z
3
0
transformers
[ "transformers", "gguf", "en", "base_model:moondriller/anarchy-llama2-13B-v2", "base_model:quantized:moondriller/anarchy-llama2-13B-v2", "endpoints_compatible", "region:us" ]
null
2024-04-05T16:58:16Z
--- base_model: moondriller/anarchy-llama2-13B-v2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/moondriller/anarchy-llama2-13B-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_S.gguf) | Q3_K_S | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q3_K_L.gguf) | Q3_K_L | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q5_K_S.gguf) | Q5_K_S | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q6_K.gguf) | Q6_K | 10.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/anarchy-llama2-13B-v2-GGUF/resolve/main/anarchy-llama2-13B-v2.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mergerix-7b-v0.5-GGUF
mradermacher
2024-05-06T05:14:27Z
5
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "automerger/YamshadowExperiment28-7B", "automerger/PasticheInex12-7B", "en", "base_model:MiniMoog/Mergerix-7b-v0.5", "base_model:quantized:MiniMoog/Mergerix-7b-v0.5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T18:32:48Z
--- base_model: MiniMoog/Mergerix-7b-v0.5 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - automerger/YamshadowExperiment28-7B - automerger/PasticheInex12-7B --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MiniMoog/Mergerix-7b-v0.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mergerix-7b-v0.5-GGUF/resolve/main/Mergerix-7b-v0.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/H4na-7B-v0.1-GGUF
mradermacher
2024-05-06T05:14:20Z
23
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:Smuggling1710/H4na-7B-v0.1", "base_model:quantized:Smuggling1710/H4na-7B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T19:53:00Z
--- base_model: Smuggling1710/H4na-7B-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Smuggling1710/H4na-7B-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/H4na-7B-v0.1-GGUF/resolve/main/H4na-7B-v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/13B-HyperMantis-GGUF
mradermacher
2024-05-06T05:14:16Z
97
0
transformers
[ "transformers", "gguf", "llama", "alpaca", "vicuna", "mix", "merge", "model merge", "roleplay", "chat", "instruct", "en", "base_model:digitous/13B-HyperMantis", "base_model:quantized:digitous/13B-HyperMantis", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-05T20:04:53Z
--- base_model: digitous/13B-HyperMantis language: - en library_name: transformers license: other quantized_by: mradermacher tags: - llama - alpaca - vicuna - mix - merge - model merge - roleplay - chat - instruct --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/13B-HyperMantis <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/13B-HyperMantis-GGUF/resolve/main/13B-HyperMantis.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-70b-RP-i1-GGUF
mradermacher
2024-05-06T05:13:54Z
16
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "ja", "base_model:nitky/Swallow-70b-RP", "base_model:quantized:nitky/Swallow-70b-RP", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-05T21:40:58Z
--- base_model: nitky/Swallow-70b-RP language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/nitky/Swallow-70b-RP <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 14.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 16.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 21.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q2_K.gguf) | i1-Q2_K | 25.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 30.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q4_0.gguf) | i1-Q4_0 | 39.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.6 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.0 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-RP-i1-GGUF/resolve/main/Swallow-70b-RP.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
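The i1-Q6_K entry in the card above is published as two parts (`.part1of2` and `.part2of2`), and the card's Usage note points to instructions on concatenating multi-part files. A minimal sketch of that step, assuming both part files have already been downloaded into the working directory: the parts are plain byte splits, so concatenating them in order yields the final `.gguf` (on a Unix shell, `cat` would do the same).

```python
# Minimal sketch (not from the card): stitch a multi-part quant back together.
# File names are the i1-Q6_K parts linked in the table above; simple
# concatenation in part order is sufficient
# (equivalently: `cat x.part1of2 x.part2of2 > x.gguf` in a shell).
import shutil

parts = [
    "Swallow-70b-RP.i1-Q6_K.gguf.part1of2",
    "Swallow-70b-RP.i1-Q6_K.gguf.part2of2",
]

with open("Swallow-70b-RP.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # copyfileobj streams in chunks, so the ~57 GB file never sits in RAM
            shutil.copyfileobj(src, out)
```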
mradermacher/13B-Chimera-GGUF
mradermacher
2024-05-06T05:13:49Z
34
0
transformers
[ "transformers", "gguf", "llama", "cot", "vicuna", "uncensored", "merge", "mix", "gptq", "en", "base_model:digitous/13B-Chimera", "base_model:quantized:digitous/13B-Chimera", "endpoints_compatible", "region:us" ]
null
2024-04-05T23:11:33Z
--- base_model: digitous/13B-Chimera language: - en library_name: transformers quantized_by: mradermacher tags: - llama - cot - vicuna - uncensored - merge - mix - gptq --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/13B-Chimera <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/13B-Chimera-GGUF/resolve/main/13B-Chimera.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/LemonadeRP-4.5.3-11B-GGUF
mradermacher
2024-05-06T05:13:46Z
6
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mpasila/LemonadeRP-4.5.3-11B", "base_model:quantized:mpasila/LemonadeRP-4.5.3-11B", "endpoints_compatible", "region:us" ]
null
2024-04-06T00:30:17Z
--- base_model: mpasila/LemonadeRP-4.5.3-11B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/mpasila/LemonadeRP-4.5.3-11B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-11B-GGUF/resolve/main/LemonadeRP-4.5.3-11B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/mixtral-8x1.5B-GGUF
mradermacher
2024-05-06T05:13:34Z
96
0
transformers
[ "transformers", "gguf", "en", "base_model:sanchit-gandhi/mixtral-8x1.5B", "base_model:quantized:sanchit-gandhi/mixtral-8x1.5B", "endpoints_compatible", "region:us" ]
null
2024-04-06T04:02:35Z
--- base_model: sanchit-gandhi/mixtral-8x1.5B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/sanchit-gandhi/mixtral-8x1.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.IQ3_S.gguf) | IQ3_S | 4.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q5_K_S.gguf) | Q5_K_S | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q6_K.gguf) | Q6_K | 7.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mixtral-8x1.5B-GGUF/resolve/main/mixtral-8x1.5B.Q8_0.gguf) | Q8_0 | 9.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF
mradermacher
2024-05-06T05:13:24Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:kaist-ai/prometheus-8x7b-v2.0-1-pp", "base_model:quantized:kaist-ai/prometheus-8x7b-v2.0-1-pp", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T05:27:51Z
--- base_model: kaist-ai/prometheus-8x7b-v2.0-1-pp language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/kaist-ai/prometheus-8x7b-v2.0-1-pp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_XS.gguf) | IQ3_XS | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [PART 1](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-1-pp-GGUF/resolve/main/prometheus-8x7b-v2.0-1-pp.Q8_0.gguf.part2of2) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/pandafish-3-7B-32k-GGUF
mradermacher
2024-05-06T05:13:16Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ichigoberry/pandafish-3-7B-32k", "base_model:quantized:ichigoberry/pandafish-3-7B-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T06:14:55Z
--- base_model: ichigoberry/pandafish-3-7B-32k language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ichigoberry/pandafish-3-7B-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pandafish-3-7B-32k-GGUF/resolve/main/pandafish-3-7B-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/bophades-mistral-truthy-DPO-7B-GGUF
mradermacher
2024-05-06T05:13:05Z
16
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/truthy-dpo-v0.1", "base_model:nbeerbower/bophades-mistral-truthy-DPO-7B", "base_model:quantized:nbeerbower/bophades-mistral-truthy-DPO-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T08:25:50Z
--- base_model: nbeerbower/bophades-mistral-truthy-DPO-7B datasets: - jondurbin/truthy-dpo-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bophades-mistral-truthy-DPO-7B-GGUF/resolve/main/bophades-mistral-truthy-DPO-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Pioneer-2x7B-GGUF
mradermacher
2024-05-06T05:12:46Z
78
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:hibana2077/Pioneer-2x7B", "base_model:quantized:hibana2077/Pioneer-2x7B", "endpoints_compatible", "region:us" ]
null
2024-04-06T10:24:47Z
--- base_model: hibana2077/Pioneer-2x7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/hibana2077/Pioneer-2x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Pioneer-2x7B-GGUF/resolve/main/Pioneer-2x7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
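Since every card in this listing carries the same size-sorted "Provided Quants" table, a hypothetical helper (not part of any card) can turn such a table into a quant choice for a given memory budget. The (Type, Size/GB) pairs below are transcribed from the Pioneer-2x7B table above; the 1.5 GB headroom default is an illustrative assumption for context/KV-cache, not a recommendation from the card.

```python
# Hypothetical helper (not part of any card): choose the largest quant whose
# file fits a memory budget, leaving headroom for context/KV cache.
# (Type, Size/GB) pairs below are copied from the Pioneer-2x7B table above.
QUANTS = [
    ("Q2_K", 2.8), ("IQ3_XS", 3.1), ("Q3_K_S", 3.3), ("IQ3_S", 3.3),
    ("IQ3_M", 3.4), ("Q3_K_M", 3.6), ("Q3_K_L", 3.9), ("IQ4_XS", 4.0),
    ("Q4_K_S", 4.2), ("Q4_K_M", 4.5), ("Q5_K_S", 5.1), ("Q5_K_M", 5.2),
    ("Q6_K", 6.0), ("Q8_0", 7.8),
]

def pick_quant(budget_gb: float, headroom_gb: float = 1.5) -> str | None:
    """Largest quant whose file size fits within budget_gb minus headroom_gb."""
    usable = budget_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANTS if size <= usable]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))   # 8 GB free -> "Q6_K" (6.0 GB file, 2 GB to spare)
print(pick_quant(4.5))   # 4.5 GB free -> "Q2_K" is the only one that fits
```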
mradermacher/Wittgenbot-7B-GGUF
mradermacher
2024-05-06T05:12:38Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:descartesevildemon/Wittgenbot-7B", "base_model:quantized:descartesevildemon/Wittgenbot-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T10:54:34Z
--- base_model: descartesevildemon/Wittgenbot-7B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/descartesevildemon/Wittgenbot-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF
mradermacher
2024-05-06T05:12:36Z
5
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp", "jeiku/BuRPInfinity_9B", "en", "base_model:Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp", "base_model:quantized:Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp", "endpoints_compatible", "region:us" ]
null
2024-04-06T11:14:58Z
--- base_model: Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp - jeiku/BuRPInfinity_9B --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Smuggling1710/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | 
[GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp-GGUF/resolve/main/BuRPInfinWestLakev2-IreneRP-Neural-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF
mradermacher
2024-05-06T05:12:26Z
1
0
transformers
[ "transformers", "gguf", "SkillEnhanced", "mistral", "en", "base_model:HachiML/Swallow-MS-7b-v0.1-ChatMathSkill", "base_model:quantized:HachiML/Swallow-MS-7b-v0.1-ChatMathSkill", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T11:24:30Z
--- base_model: HachiML/Swallow-MS-7b-v0.1-ChatMathSkill language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - SkillEnhanced - mistral --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/HachiML/Swallow-MS-7b-v0.1-ChatMathSkill <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatMathSkill-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatMathSkill.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Alpacino30b-GGUF
mradermacher
2024-05-06T05:12:21Z
71
0
transformers
[ "transformers", "gguf", "alpaca", "en", "base_model:digitous/Alpacino30b", "base_model:quantized:digitous/Alpacino30b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-06T11:56:58Z
--- base_model: digitous/Alpacino30b language: - en library_name: transformers license: other quantized_by: mradermacher tags: - alpaca --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/digitous/Alpacino30b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/GPT4-X-Alpasta-30b-GGUF
mradermacher
2024-05-06T05:12:15Z
39
0
transformers
[ "transformers", "gguf", "en", "base_model:MetaIX/GPT4-X-Alpasta-30b", "base_model:quantized:MetaIX/GPT4-X-Alpasta-30b", "endpoints_compatible", "region:us" ]
null
2024-04-06T14:12:48Z
--- base_model: MetaIX/GPT4-X-Alpasta-30b language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MetaIX/GPT4-X-Alpasta-30b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF/resolve/main/GPT4-X-Alpasta-30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Enterredaas-33b-GGUF
mradermacher
2024-05-06T05:12:01Z
15
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/Enterredaas-33b", "base_model:quantized:Aeala/Enterredaas-33b", "endpoints_compatible", "region:us" ]
null
2024-04-06T16:27:16Z
---
base_model: Aeala/Enterredaas-33b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aeala/Enterredaas-33b

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Enterredaas-33b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Enterredaas-33b-GGUF/resolve/main/Enterredaas-33b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
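The Usage section of the card above defers to TheBloke's READMEs for working with GGUF files. As a minimal sketch, assuming the `huggingface_hub` Python package (which the card itself does not mention), a single quant file from this repository can be fetched like so; the Q4_K_S file is just an example taken from the table:

```python
# Minimal sketch: download one quant file from the repo described above.
# Assumes `pip install huggingface_hub`; the Q4_K_S choice is only an example.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="mradermacher/Enterredaas-33b-GGUF",
    filename="Enterredaas-33b.Q4_K_S.gguf",
)
print(f"Downloaded to: {local_path}")
```

Any other filename from the quant table can be substituted; the function returns the local cache path of the downloaded file.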
mradermacher/sixtyoneeighty-4x7B-v1-GGUF
mradermacher
2024-05-06T05:11:55Z
14
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "jambroz/sixtyoneeighty-7b-chat", "en", "base_model:jambroz/sixtyoneeighty-4x7B-v1", "base_model:quantized:jambroz/sixtyoneeighty-4x7B-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-06T17:17:17Z
---
base_model: jambroz/sixtyoneeighty-4x7B-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- jambroz/sixtyoneeighty-7b-chat
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jambroz/sixtyoneeighty-4x7B-v1

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v1-GGUF/resolve/main/sixtyoneeighty-4x7B-v1.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/MermaidMixtral-3x7b-GGUF
mradermacher
2024-05-06T05:11:50Z
45
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/MermaidMixtral-3x7b", "base_model:quantized:TroyDoesAI/MermaidMixtral-3x7b", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T17:23:40Z
---
base_model: TroyDoesAI/MermaidMixtral-3x7b
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/MermaidMixtral-3x7b

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q5_K_S.gguf) | Q5_K_S | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MermaidMixtral-3x7b-GGUF/resolve/main/MermaidMixtral-3x7b.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/GPT4-x-AlpacaDente2-30b-GGUF
mradermacher
2024-05-06T05:11:47Z
88
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/GPT4-x-AlpacaDente2-30b", "base_model:quantized:Aeala/GPT4-x-AlpacaDente2-30b", "endpoints_compatible", "region:us" ]
null
2024-04-06T18:41:58Z
---
base_model: Aeala/GPT4-x-AlpacaDente2-30b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aeala/GPT4-x-AlpacaDente2-30b

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Alpaca-elina-65b-GGUF
mradermacher
2024-05-06T05:11:45Z
5
0
transformers
[ "transformers", "gguf", "en", "base_model:Aeala/Alpaca-elina-65b", "base_model:quantized:Aeala/Alpaca-elina-65b", "endpoints_compatible", "region:us" ]
null
2024-04-06T19:05:16Z
---
base_model: Aeala/Alpaca-elina-65b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aeala/Alpaca-elina-65b

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alpaca-elina-65b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q2_K.gguf) | Q2_K | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_XS.gguf) | IQ3_XS | 26.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_S.gguf) | IQ3_S | 28.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_S.gguf) | Q3_K_S | 28.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ3_M.gguf) | IQ3_M | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_M.gguf) | Q3_K_M | 31.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q3_K_L.gguf) | Q3_K_L | 34.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.IQ4_XS.gguf) | IQ4_XS | 35.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q4_K_S.gguf) | Q4_K_S | 37.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q4_K_M.gguf) | Q4_K_M | 39.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q5_K_S.gguf) | Q5_K_S | 45.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q5_K_M.gguf) | Q5_K_M | 46.3 | |
| [PART 1](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q6_K.gguf.part2of2) | Q6_K | 53.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpaca-elina-65b-GGUF/resolve/main/Alpaca-elina-65b.Q8_0.gguf.part2of2) | Q8_0 | 69.5 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
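The Q6_K and Q8_0 quants in the card above are split into `.part1of2`/`.part2of2` files, and the Usage note says multi-part files must be concatenated into a single `.gguf` before use. A minimal Python sketch of that step, equivalent to `cat part1 part2 > file.gguf` (it assumes the two part files have already been downloaded into the working directory under the names from the table):

```python
# Minimal sketch: join the two downloaded parts back into one usable GGUF file.
import shutil

parts = [
    "Alpaca-elina-65b.Q6_K.gguf.part1of2",
    "Alpaca-elina-65b.Q6_K.gguf.part2of2",
]

with open("Alpaca-elina-65b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream bytes so a 50+ GB file never sits in RAM
```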
mradermacher/WinterGoddess-1.4x-70B-L2-GGUF
mradermacher
2024-05-06T05:11:42Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/WinterGoddess-1.4x-70B-L2", "base_model:quantized:Sao10K/WinterGoddess-1.4x-70B-L2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T19:40:17Z
---
base_model: Sao10K/WinterGoddess-1.4x-70B-L2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70B-L2-GGUF/resolve/main/WinterGoddess-1.4x-70B-L2.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Alpacino30b-i1-GGUF
mradermacher
2024-05-06T05:11:37Z
215
0
transformers
[ "transformers", "gguf", "alpaca", "en", "base_model:digitous/Alpacino30b", "base_model:quantized:digitous/Alpacino30b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-06T21:07:29Z
---
base_model: digitous/Alpacino30b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- alpaca
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/digitous/Alpacino30b

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Alpacino30b-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF/resolve/main/Alpacino30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/PandafishHeatherReReloaded-GGUF
mradermacher
2024-05-06T05:11:22Z
131
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "MysticFoxMagic/HeatherSpell-7b", "ichigoberry/pandafish-2-7b-32k", "en", "base_model:MysticFoxMagic/PandafishHeatherReReloaded", "base_model:quantized:MysticFoxMagic/PandafishHeatherReReloaded", "endpoints_compatible", "region:us" ]
null
2024-04-07T00:10:05Z
---
base_model: MysticFoxMagic/PandafishHeatherReReloaded
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- MysticFoxMagic/HeatherSpell-7b
- ichigoberry/pandafish-2-7b-32k
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MysticFoxMagic/PandafishHeatherReReloaded

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReReloaded-GGUF/resolve/main/PandafishHeatherReReloaded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/PandafishHeatherReloaded-GGUF
mradermacher
2024-05-06T05:11:11Z
89
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "ichigoberry/pandafish-dt-7b", "MysticFoxMagic/HeatherSpell-7b", "en", "base_model:MysticFoxMagic/PandafishHeatherReloaded", "base_model:quantized:MysticFoxMagic/PandafishHeatherReloaded", "endpoints_compatible", "region:us" ]
null
2024-04-07T01:32:30Z
---
base_model: MysticFoxMagic/PandafishHeatherReloaded
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- ichigoberry/pandafish-dt-7b
- MysticFoxMagic/HeatherSpell-7b
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MysticFoxMagic/PandafishHeatherReloaded

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PandafishHeatherReloaded-GGUF/resolve/main/PandafishHeatherReloaded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Tess-72B-v1.5b-GGUF
mradermacher
2024-05-06T05:11:02Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-72B-v1.5b", "base_model:quantized:migtissera/Tess-72B-v1.5b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-07T03:47:12Z
---
base_model: migtissera/Tess-72B-v1.5b
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen-72B/blob/main/LICENSE
license_name: qwen-72b-licence
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/migtissera/Tess-72B-v1.5b

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tess-72B-v1.5b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q2_K.gguf) | Q2_K | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_XS.gguf) | IQ3_XS | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_S.gguf) | IQ3_S | 31.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_S.gguf) | Q3_K_S | 31.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ3_M.gguf) | IQ3_M | 33.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_M.gguf) | Q3_K_M | 35.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q3_K_L.gguf) | Q3_K_L | 38.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.IQ4_XS.gguf) | IQ4_XS | 39.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q4_K_S.gguf) | Q4_K_S | 41.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q4_K_M.gguf) | Q4_K_M | 43.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_S.gguf.part2of2) | Q5_K_S | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q5_K_M.gguf.part2of2) | Q5_K_M | 51.4 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q6_K.gguf.part2of2) | Q6_K | 59.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-72B-v1.5b-GGUF/resolve/main/Tess-72B-v1.5b.Q8_0.gguf.part2of2) | Q8_0 | 76.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Limitless-GGUF
mradermacher
2024-05-06T05:10:56Z
116
0
transformers
[ "transformers", "gguf", "en", "base_model:alkahestry/Limitless", "base_model:quantized:alkahestry/Limitless", "endpoints_compatible", "region:us" ]
null
2024-04-07T04:22:23Z
---
base_model: alkahestry/Limitless
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/alkahestry/Limitless

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Limitless-GGUF/resolve/main/Limitless.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Flammen-Bophades-7B-GGUF
mradermacher
2024-05-06T05:10:47Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/Flammen-Bophades-7B", "base_model:quantized:nbeerbower/Flammen-Bophades-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T06:01:59Z
---
base_model: nbeerbower/Flammen-Bophades-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/Flammen-Bophades-7B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Flammen-Bophades-7B-GGUF/resolve/main/Flammen-Bophades-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/WinterGoddess-1.4x-70b-32k-GGUF
mradermacher
2024-05-06T05:10:33Z
41
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "base_model:quantized:ChuckMcSneed/WinterGoddess-1.4x-70b-32k", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-07T08:19:24Z
---
base_model: ChuckMcSneed/WinterGoddess-1.4x-70b-32k
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ChuckMcSneed/WinterGoddess-1.4x-70b-32k

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WinterGoddess-1.4x-70b-32k-GGUF/resolve/main/WinterGoddess-1.4x-70b-32k.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/Mestral-7B-GGUF
mradermacher
2024-05-06T05:10:26Z
34
0
transformers
[ "transformers", "gguf", "en", "base_model:alkahestry/Mestral-7B", "base_model:quantized:alkahestry/Mestral-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-07T09:56:57Z
---
base_model: alkahestry/Mestral-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/alkahestry/Mestral-7B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mestral-7B-GGUF/resolve/main/Mestral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mradermacher/ds-brew-13b-GGUF
mradermacher
2024-05-06T05:10:21Z
21
0
transformers
[ "transformers", "gguf", "llama", "llama-2", "en", "base_model:Doctor-Shotgun/ds-brew-13b", "base_model:quantized:Doctor-Shotgun/ds-brew-13b", "license:agpl-3.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T11:18:47Z
--- base_model: Doctor-Shotgun/ds-brew-13b
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- llama
- llama-2
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Doctor-Shotgun/ds-brew-13b

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ds-brew-13b-GGUF/resolve/main/ds-brew-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
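The quants in this table are all single files, but the Usage note above also mentions concatenating multi-part files. For parts produced by a plain byte-level split (the cat-style splitting described in the linked TheBloke README), rejoining them amounts to concatenating the parts in order, as in the hedged Python sketch below; parts produced with llama.cpp's gguf-split tool need that tool's merge mode instead. The file names here are illustrative only, not files from this repo.

```python
import shutil
from pathlib import Path

def join_gguf_parts(parts: list[Path], output: Path) -> None:
    """Concatenate byte-split GGUF parts, in the given order, into one file."""
    with output.open("wb") as out:
        for part in parts:
            with part.open("rb") as src:
                # Stream copy so large parts are never loaded fully into memory.
                shutil.copyfileobj(src, out)

# Hypothetical part names for a quant that was split in two.
join_gguf_parts(
    [Path("model.Q8_0.gguf.part1of2"), Path("model.Q8_0.gguf.part2of2")],
    Path("model.Q8_0.gguf"),
)
```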
mradermacher/lolicore-test-GGUF
mradermacher
2024-05-06T05:10:17Z
35
0
transformers
[ "transformers", "gguf", "en", "base_model:Rorical/lolicore-test", "base_model:quantized:Rorical/lolicore-test", "endpoints_compatible", "region:us" ]
null
2024-04-07T11:49:16Z
--- base_model: Rorical/lolicore-test
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Rorical/lolicore-test

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.IQ3_S.gguf) | IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.IQ3_M.gguf) | IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lolicore-test-GGUF/resolve/main/lolicore-test.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->