bobchenyx committed on
Commit 2e48367 · verified · 1 Parent(s): 37ff8d2

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -16,9 +16,8 @@ pipeline_tag: text-generation
 
 This page is going to be deprecated. For other quantized versions, please refer to [moxin-org/DeepSeek-V3-0324-Moxin-GGUF](https://huggingface.co/moxin-org/DeepSeek-V3-0324-Moxin-GGUF) for more details.
 
-Original model: Adopting **BF16** & **Imatrix** from [unsloth/DeepSeek-V3-0324-BF16](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD/tree/main/BF16).
 
-All quants made with modification of llama.cpp based on [Bobchenyx/llama.cpp](https://github.com/Bobchenyx/llama.cpp).
+All quants made based on [moxin-org/CC-MoE](https://github.com/moxin-org/CC-MoE).
 ```
 - IQ1_S : 129.94 GiB (1.66 BPW)
 - IQ1_M : 144.24 GiB (1.85 BPW)
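
The BPW (bits per weight) figures in the quant list follow directly from file size and parameter count. A minimal sketch, assuming DeepSeek-V3's published ~671B parameter count (the helper name and the rounding are illustrative, not part of the repo):

```python
def bits_per_weight(size_gib: float, n_params: float) -> float:
    """Average bits stored per model weight for a quantized GGUF file."""
    total_bits = size_gib * 2**30 * 8  # GiB -> bytes -> bits
    return total_bits / n_params

# Assumed parameter count for DeepSeek-V3 (~671B total parameters).
N_PARAMS = 671e9

print(round(bits_per_weight(129.94, N_PARAMS), 2))  # IQ1_S -> 1.66
print(round(bits_per_weight(144.24, N_PARAMS), 2))  # IQ1_M -> 1.85
```

Both values reproduce the BPW numbers listed in the README, which is a quick sanity check that the sizes refer to the full ~671B-parameter model rather than a pruned variant.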