Active filters: quantized
QuantStack/Wan2.1_I2V_14B_FusionX-GGUF
MaziyarPanahi/Qwen3-1.7B-GGUF • 2B • Updated • 163k • 4 • Text Generation
QuantStack/Phantom_Wan_14B_FusionX-GGUF
MaziyarPanahi/Llama-3.2-1B-Instruct-GGUF • 1B • Updated • 161k • 15 • Text Generation
MaziyarPanahi/gemma-3-1b-it-GGUF • 1.0B • Updated • 173k • 7 • Text Generation
RedHatAI/Kimi-K2-Instruct-quantized.w4a16 • Updated • 5.99k • 8 • Text Generation
SandLogicTechnologies/MedGemma-4B-IT-GGUF • 4B • Updated • 103 • 2
nvidia/DeepSeek-R1-FP4-v2 • 394B • Updated • 2 • 2 • Text Generation
MaziyarPanahi/ChatMusician-GGUF • 7B • Updated • 128 • 15 • Text Generation
AetherArchitectural/GGUF-Quantization-Script • Updated • 67 • Text Generation
MaziyarPanahi/Mistral-Nemo-Instruct-2407-GGUF • 12B • Updated • 168k • 49 • Text Generation
legraphista/Palmyra-Fin-70B-32K-IMat-GGUF • 71B • Updated • 1.68k • 9 • Text Generation
MaziyarPanahi/Llama-3.2-3B-Instruct-GGUF • 3B • Updated • 159k • 14 • Text Generation
MaziyarPanahi/MistralNemoTiny-GGUF • 5B • Updated • 51 • 2 • Text Generation
MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF • 123B • Updated • 157k • 2 • Text Generation
MaziyarPanahi/Llama-3.3-70B-Instruct-GGUF • 71B • Updated • 192k • 15 • Text Generation
RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic • 71B • Updated • 8.77k • 9 • Text Generation
RedHatAI/phi-4-quantized.w8a8 • 15B • Updated • 1.3k • 2 • Text Generation
RedHatAI/phi-4-quantized.w4a16 • 3B • Updated • 1.34k • 3 • Text Generation
MaziyarPanahi/gemma-3-4b-it-GGUF • 4B • Updated • 165k • 10 • Text Generation
MaziyarPanahi/DeepHermes-3-Llama-3-3B-Preview-abliterated-GGUF • 3B • Updated • 51 • 1 • Text Generation
nvidia/Llama-4-Scout-17B-16E-Instruct-FP4 • 62B • Updated • 184 • 1
MaziyarPanahi/Qwen3-4B-GGUF • 4B • Updated • 172k • 5 • Text Generation
RedHatAI/Qwen3-4B-quantized.w4a16 • 1B • Updated • 568 • 1 • Text Generation
RedHatAI/Qwen3-32B-FP8-dynamic • 33B • Updated • 3.59k • 12 • Text Generation
RedHatAI/Qwen3-14B-FP8-dynamic • 15B • Updated • 390 • 3 • Text Generation
RedHatAI/Qwen3-32B-quantized.w4a16 • 6B • Updated • 2.64k • 9 • Text Generation
QuantStack/Wan2.1_T2V_14B_FusionX-GGUF • 14B • Updated • 15.1k • 22 • Text-to-Video
humbleakh/chain-of-zoom-8bit-complete-pipeline • Updated • 1 • Image-to-Image