---
license: apache-2.0
tags:
- requests
- gguf
- quantized
---
<!--
Apologies for any inconvenience caused!
-->

![requests-banner.png](https://huggingface.co/Lewdiculous/Model-Requests/resolve/main/requests-banner.png)

> [!IMPORTANT]
> # Status: <br>
> **Quant-Requests are [PAUSED](https://huggingface.co/Lewdiculous/Model-Requests/edit/main/README.md#status-) momentarily.** <br>
> I sincerely apologize for disrupting your experience! <br> <br>
> Only if you **want to** and you are able... <br>
> [**You can support my personal endeavours here (Ko-fi).**](https://go.datasets.fyi/thank-you-hf) <br> <br>
> Eventually I want to be able to set aside resources for a dedicated infrastructure. <br>
> In the meantime, I'll keep providing quants whenever possible with the resources available at the time. <br>


<!--
> [!TIP]
> **Quant-Requests are open.** <br>
> I apologize for disrupting your experience. <br>
> Only if you **want to** and you are able... <br>
> [**You can support my personal endeavours here (Ko-fi).**](https://go.datasets.fyi/thank-you-hf) <br> <br>
> Eventually I want to be able to set aside resources for a dedicated infrastructure. <br>
> In the meantime, I'll keep providing quants whenever possible with the resources available at the time. <br>

-->

# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Please read everything.

This card is meant only to request GGUF-IQ-Imatrix quants for models that meet the requirements below.

**Requirements to request GGUF-Imatrix model quantizations:**

For the model:
- Maximum model parameter size of ~~11B~~ **12B**. Note that models larger than 8B parameters may take longer to process and upload than smaller ones.<br>
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.* <br>
*Preferably for Mistral and Llama-3 based models in the creative/roleplay niche.* <br>
*If you need quants for a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). He's doing amazing work.*

Important:
- Fill the request template as outlined in the next section.

#### How to request a model quantization:

1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.

2. Include the following template in your new discussion post. You can copy and paste it as is, filling in the required information by replacing the {{placeholders}} ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):

```
**[Required] Model name:** <br>
{{replace-this}}

**[Required] Model link:** <br>
{{replace-this}}

**[Required] Brief description:** <br>
{{replace-this}}

**[Required] An image/direct image link to represent the model (square shaped):** <br>
{{replace-this}}

**[Optional] Additional quants (if you want any):** <br>

<!-- Keep in mind that anything below I/Q3 isn't recommended,    -->
<!-- since for these smaller models the results will likely be   -->
<!-- highly incoherent, rendering them unusable for your needs.  -->


Default list of quants for reference:

        "IQ3_M", "IQ3_XXS",
        "Q4_0", "Q4_K_M", "Q4_K_S", "IQ4_XS",
        "Q5_K_M", "Q5_K_S",
        "Q6_K",
        "Q8_0"

```
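For reference, quants like the default list above are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. A minimal sketch of that workflow follows; the file names (`model-f16.gguf`, `calibration.txt`, `imatrix.dat`) are placeholders, and the script only prints the commands it would run rather than executing them:

```shell
#!/bin/sh
# Sketch of an imatrix quantization workflow (llama.cpp tooling).
# MODEL and CALIB are placeholder paths, not real files.
MODEL=model-f16.gguf
CALIB=calibration.txt

# 1) Build the importance matrix from calibration text:
#      ./llama-imatrix -m "$MODEL" -f "$CALIB" -o imatrix.dat

# 2) Quantize once per target type in the default list,
#    reusing the same imatrix each time (commands printed only).
for QTYPE in IQ3_M IQ3_XXS Q4_0 Q4_K_M Q4_K_S IQ4_XS \
             Q5_K_M Q5_K_S Q6_K Q8_0; do
  echo "./llama-quantize --imatrix imatrix.dat $MODEL model-$QTYPE.gguf $QTYPE"
done
```

The imatrix step is what distinguishes these "IQ-Imatrix" quants from plain quantization: the importance matrix weights each tensor by how much it matters on the calibration text, which helps the smaller quant types stay coherent.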