EXL3 quantization of DeepSeek-Prover-V2-7B at 8 bits per weight, with the output (head) layer also quantized at 8 bits (the "-h8" suffix in the repo name).
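A minimal usage sketch, assuming the exllamav3 Python package is installed and a CUDA GPU with enough VRAM for an 8 bpw 7B model. The class names below (Config, Model, Cache, Tokenizer, Generator) follow exllamav3's example scripts and may differ between releases; the Lean-style prompt is purely illustrative.

```python
# Sketch: load this EXL3 quant with exllamav3 and run a short completion.
from huggingface_hub import snapshot_download
from exllamav3 import Config, Model, Cache, Tokenizer, Generator  # assumed API

# Fetch the quantized weights from the Hub.
model_dir = snapshot_download("isogen/DeepSeek-Prover-V2-7B-exl3-8bpw-h8")

# Load model, KV cache, and tokenizer (per exllamav3 examples; may vary by version).
config = Config.from_directory(model_dir)
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)
model.load()
tokenizer = Tokenizer.from_config(config)

generator = Generator(model=model, cache=cache, tokenizer=tokenizer)

# Illustrative prompt for a Lean 4 proof-completion model.
prompt = "theorem add_comm' (a b : Nat) : a + b = b + a := by"
print(generator.generate(prompt=prompt, max_new_tokens=256))
```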
Model tree for isogen/DeepSeek-Prover-V2-7B-exl3-8bpw-h8
Base model: deepseek-ai/DeepSeek-Prover-V2-7B