Melvin56/Qwen3-4B-abliterated-GGUF

Tags: Text Generation, GGUF, chat, abliterated, uncensored
License: apache-2.0
Files and versions (branch: main)

Repository size: 27 GB · 1 contributor · 12 commits
Latest commit: "Update README.md" (dedc66f, verified), 7 months ago
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 2.11 kB | Upload qwen3-4b-abliterated-BF16.gguf with huggingface_hub | 7 months ago |
| README.md | 2.78 kB | Update README.md | 7 months ago |
| imatrix.dat | 3.84 MB | Upload imatrix.dat with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-BF16.gguf | 8.05 GB | Upload qwen3-4b-abliterated-BF16.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-IQ4_XS.gguf | 2.27 GB | Upload qwen3-4b-abliterated-IQ4_XS.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q2_K.gguf | 1.67 GB | Upload qwen3-4b-abliterated-Q2_K.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q3_K_M.gguf | 2.08 GB | Upload qwen3-4b-abliterated-Q3_K_M.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q4_K_M.gguf | 2.5 GB | Upload qwen3-4b-abliterated-Q4_K_M.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q5_K_M.gguf | 2.89 GB | Upload qwen3-4b-abliterated-Q5_K_M.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q6_K.gguf | 3.31 GB | Upload qwen3-4b-abliterated-Q6_K.gguf with huggingface_hub | 7 months ago |
| qwen3-4b-abliterated-Q8_0.gguf | 4.28 GB | Upload qwen3-4b-abliterated-Q8_0.gguf with huggingface_hub | 7 months ago |
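
Any single quantization from the listing above can be fetched with the huggingface_hub client that the commit messages reference. A minimal sketch, assuming `huggingface_hub` is installed and using the Q4_K_M file as an arbitrary example; the choice of quantization and the downstream GGUF runtime are up to the reader:

```python
# Minimal sketch: download one quantized GGUF file from this repository.
# Assumes `pip install huggingface_hub`; the filename is taken from the
# file listing above and can be swapped for any other quantization.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Melvin56/Qwen3-4B-abliterated-GGUF",
    filename="qwen3-4b-abliterated-Q4_K_M.gguf",
)

print(f"GGUF file cached at: {model_path}")
# The returned path can then be handed to any GGUF-capable runtime,
# e.g. llama.cpp's `llama-cli -m <path>` (not shown here).
```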