PrunaAI / Qwen-Qwen2-72B-Instruct-GGUF-smashed
Maintained by Pruna AI
Tags: GGUF, conversational
Files and versions
Repository size: 262 GB · 1 contributor · History: 21 commits
Latest commit: sharpenb, "Upload Qwen2-72B-Instruct.fp16.bin-00001-of-00008.gguf with huggingface_hub" (151bd4c, verified, over 1 year ago)
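The repository ships each quantization as a set of GGUF shards (see the file listing below). A minimal download sketch with huggingface_hub, assuming the package is installed and that the full shard set for the chosen quantization is present in the repository; the local directory name is illustrative:

from huggingface_hub import snapshot_download

REPO_ID = "PrunaAI/Qwen-Qwen2-72B-Instruct-GGUF-smashed"

# Fetch only the Q5_K_M shards; the glob matches the
# "Qwen2-72B-Instruct.Q5_K_M.gguf-NNNNN-of-00008.gguf" naming used in this repo.
local_dir = snapshot_download(
    repo_id=REPO_ID,
    allow_patterns=["Qwen2-72B-Instruct.Q5_K_M.gguf-*"],
    local_dir="qwen2-72b-instruct-q5_k_m",  # illustrative target directory
)
print("Shards downloaded to:", local_dir)

Swap the pattern (for example "Qwen2-72B-Instruct.Q3_K_M.gguf" or "Qwen2-72B-Instruct.Q8_0.gguf-*") to pull a different quantization.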
All files were last modified over 1 year ago by commits of the form "Upload <file> with huggingface_hub" (one file per commit).

File                                                   Size
.gitattributes                                         3.19 kB
Qwen2-72B-Instruct.Q3_K_M.gguf                         37.7 GB
Qwen2-72B-Instruct.Q3_K_S.gguf                         34.5 GB
Qwen2-72B-Instruct.Q5_1.gguf-00001-of-00008.gguf       7.99 GB
Qwen2-72B-Instruct.Q5_1.gguf-00004-of-00008.gguf       7.14 GB
Qwen2-72B-Instruct.Q5_K_M.gguf-00002-of-00008.gguf     7.03 GB
Qwen2-72B-Instruct.Q5_K_M.gguf-00004-of-00008.gguf     7.02 GB
Qwen2-72B-Instruct.Q5_K_M.gguf-00005-of-00008.gguf     6.86 GB
Qwen2-72B-Instruct.Q5_K_M.gguf-00007-of-00008.gguf     7.24 GB
Qwen2-72B-Instruct.Q5_K_M.gguf-00008-of-00008.gguf     4.6 GB
Qwen2-72B-Instruct.Q6_K.gguf-00003-of-00008.gguf       8.1 GB
Qwen2-72B-Instruct.Q6_K.gguf-00006-of-00008.gguf       8.11 GB
Qwen2-72B-Instruct.Q8_0.gguf-00001-of-00008.gguf       11.3 GB
Qwen2-72B-Instruct.Q8_0.gguf-00004-of-00008.gguf       10.1 GB
Qwen2-72B-Instruct.Q8_0.gguf-00005-of-00008.gguf       9.99 GB
Qwen2-72B-Instruct.Q8_0.gguf-00006-of-00008.gguf       9.74 GB
Qwen2-72B-Instruct.Q8_0.gguf-00008-of-00008.gguf       6.14 GB
Qwen2-72B-Instruct.fp16.bin-00001-of-00008.gguf        21.3 GB
Qwen2-72B-Instruct.fp16.bin-00002-of-00008.gguf        19 GB
Qwen2-72B-Instruct.fp16.bin-00004-of-00008.gguf        19 GB
Qwen2-72B-Instruct.fp16.bin-00005-of-00008.gguf        18.8 GB
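The "-NNNNN-of-00008.gguf" suffix suggests the shards were produced with llama.cpp's gguf-split tool; that is an assumption based on the naming, not something this listing confirms. If it holds, llama-cpp-python can load a model by pointing at the first shard of a set. A minimal sketch, assuming the Q5_K_M shards (including shard 00001, not shown in the listing above) were downloaded as in the earlier snippet; n_ctx and n_gpu_layers are illustrative values:

from llama_cpp import Llama

llm = Llama(
    # First shard of the split GGUF; the remaining shards are picked up
    # automatically when the files follow the gguf-split convention.
    model_path="qwen2-72b-instruct-q5_k_m/Qwen2-72B-Instruct.Q5_K_M.gguf-00001-of-00008.gguf",
    n_ctx=4096,       # context window; adjust to your hardware
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])

If your llama.cpp build ships the gguf-split utility, the shards can instead be merged into a single GGUF file before loading; check that tool's merge options for your version.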