RedHatAI/SmolLM-360M-Instruct-quantized.w8a8 • Text Generation • 0.4B • 4 downloads
RedHatAI/SmolLM-135M-Instruct-quantized.w8a16 • Text Generation • 0.1B • 6 downloads
RedHatAI/gemma-2-27b-it-quantized.w8a16 • Text Generation • 9B • 56 downloads
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w8a16 • Text Generation • 105B • 10 downloads • 1 like
RedHatAI/gemma-2-2b-it-quantized.w8a8 • Text Generation • 3B • 6 downloads
RedHatAI/Mistral-Nemo-Instruct-2407-quantized.w4a16 • Text Generation • 3B • 108 downloads • 4 likes
RedHatAI/SmolLM-1.7B-Instruct-quantized.w8a16 • Text Generation • 0.6B • 5 downloads
RedHatAI/gemma-2-2b-it-quantized.w4a16 • Text Generation • 1B • 2.62k downloads • 1 like
RedHatAI/gemma-2-9b-it-quantized.w4a16 • Text Generation • 3B • 1.27k downloads • 2 likes
RedHatAI/Phi-3-small-128k-instruct-quantized.w8a16 • Text Generation • 3B • 6 downloads
RedHatAI/gemma-2-2b-quantized.w8a16 • Text Generation • 2B • 7 downloads
RedHatAI/gemma-2-2b-it-quantized.w8a16 • Text Generation • 2B • 32 downloads • 1 like
RedHatAI/gemma-2-9b-it-quantized.w8a16 • Text Generation • 4B • 1.04k downloads • 1 like
RedHatAI/gemma-2-2b-it-FP8 • 3B • 1.07k downloads • 1 like
RedHatAI/starcoder2-15b-quantized.w8a8 • Text Generation • 16B • 35 downloads
RedHatAI/starcoder2-7b-quantized.w8a8 • Text Generation • 7B • 22 downloads
RedHatAI/starcoder2-3b-quantized.w8a8 • Text Generation • 3B • 22 downloads
RedHatAI/starcoder2-7b-quantized.w8a16 • Text Generation • 2B • 17 downloads
RedHatAI/starcoder2-3b-quantized.w8a16 • Text Generation • 1B • 20 downloads
RedHatAI/starcoder2-15b-quantized.w8a16 • Text Generation • 4B • 16 downloads
RedHatAI/Meta-Llama-3.1-70B-quantized.w8a8 • Text Generation • 71B • 7 downloads
RedHatAI/Meta-Llama-3.1-405B-FP8 • Text Generation • 410B • 118 downloads
RedHatAI/Meta-Llama-3.1-70B-quantized.w8a16 • Text Generation • 19B • 5 downloads
RedHatAI/starcoder2-3b-FP8 • Text Generation • 3B • 21 downloads
RedHatAI/starcoder2-7b-FP8 • Text Generation • 7B • 17 downloads
RedHatAI/starcoder2-15b-FP8 • Text Generation • 16B • 25 downloads
RedHatAI/Mistral-Nemo-Instruct-2407-quantized.w8a16 • Text Generation • 4B • 15 downloads
RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 • Text Generation • 3B • 9 downloads • 1 like
RedHatAI/Meta-Llama-3.1-70B-FP8 • Text Generation • 71B • 1.09k downloads • 2 likes
RedHatAI/Mistral-Large-Instruct-2407-FP8 • Text Generation • 123B • 45 downloads
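The model IDs above encode their quantization scheme in the suffix: `wNaM` denotes N-bit weights and M-bit activations (with `a16` indicating activations kept in 16-bit floating point), while `FP8` denotes 8-bit floating-point quantization. As a rough illustration, a small hypothetical helper (`parse_quant_scheme` is not part of any library here) could recover the scheme from an ID like this:

```python
import re

def parse_quant_scheme(model_id: str):
    """Hypothetical helper: infer the quantization scheme from a model ID.

    A '.wNaM' suffix is read as N-bit weights and M-bit activations;
    an '-FP8' suffix as 8-bit floating-point weights and activations.
    Returns None when no recognized suffix is present.
    """
    name = model_id.rsplit("/", 1)[-1]  # drop the org prefix, e.g. "RedHatAI/"
    m = re.search(r"\.w(\d+)a(\d+)$", name)
    if m:
        return {
            "weight_bits": int(m.group(1)),
            "activation_bits": int(m.group(2)),
            "format": "int",
        }
    if name.endswith("-FP8"):
        return {"weight_bits": 8, "activation_bits": 8, "format": "fp8"}
    return None

print(parse_quant_scheme("RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w8a16"))
# → {'weight_bits': 8, 'activation_bits': 16, 'format': 'int'}
```

In this reading, `w8a8` and `FP8` variants quantize both weights and activations, while `w4a16`/`w8a16` are weight-only schemes that trade a larger memory footprint reduction for unquantized activations.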