RedHatAI/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 Text Generation • 32B • Updated about 1 hour ago • 91 • 2
NVIDIA-Nemotron-3-Nano-30B-A3B Quantized Models Collection FP8-dynamic, FP8-block, NVFP4, INT4 versions of nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B • 2 items • Updated about 5 hours ago
inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 Text Generation • 32B • Updated about 2 hours ago
inference-optimization/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 Text Generation • 32B • Updated about 2 hours ago
Qwen3-Next-80B-A3B Quantized Models Collection FP8-dynamic, FP8-block, NVFP4, INT4, INT8 versions of Qwen3-Next-80B-A3B-Instruct and Qwen3-Next-80B-A3B-Thinking Models • 10 items • Updated about 5 hours ago
inference-optimization/Qwen3-Next-80B-A3B-Thinking-FP8 Text Generation • 81B • Updated about 2 hours ago
inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8 Text Generation • 81B • Updated about 2 hours ago
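A minimal sketch of loading one of the listed FP8 checkpoints for local inference, assuming vLLM is the serving stack and supports this quantized checkpoint; any other model ID from the listing above could be substituted.

```python
# Sketch: serve one of the listed FP8-quantized checkpoints with vLLM.
# Assumption: vLLM can load this FP8 checkpoint on the available GPU(s).
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8")
params = SamplingParams(max_tokens=128, temperature=0.7)

outputs = llm.generate(["Summarize FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```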