Hugging Face
Erik Scholz (Green-Sky)
38 followers · 29 following
AI & ML interests
None yet
Recent Activity
- liked a model about 5 hours ago: prism-ml/Ternary-Bonsai-8B-gguf
- reacted to eaddario's post with 🚀 1 day ago:
Experimental global target bits-per-weight quantization of google/gemma-4-E2B-it, google/gemma-4-E4B-it and google/gemma-4-26B-A4B-it.

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most, and produces high-quality models that meet a precise global file-size target.

Key advantages:
- VRAM maximization: can generate high-quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24 GB of VRAM).
- Data-driven precision: the quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs.

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the model cards:
https://huggingface.co/eaddario/gemma-4-E2B-it-GGUF
https://huggingface.co/eaddario/gemma-4-E4B-it-GGUF
https://huggingface.co/eaddario/gemma-4-26B-A4B-it-GGUF
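The core idea in the post above, allocating per-tensor precision by error sensitivity until a global bits-per-weight target is hit, can be sketched as a greedy search. This is a minimal illustration, not eaddario's actual method: the tensor names, the sensitivity numbers, and the cost model are all made up, and the bit-width levels only loosely echo typical llama.cpp quant-type BPWs.

```python
def allocate_bpw(tensors, target_bpw, levels=(8.5, 6.5, 5.5, 4.5, 3.5, 2.5)):
    """Greedy global target-BPW allocation (illustrative sketch only).

    tensors: list of (name, n_params, sensitivity) tuples; higher sensitivity
    means quantization error on that tensor hurts quality more.
    Returns a {name: bits_per_weight} mix whose weighted average is at or
    below target_bpw (or as low as the level ladder allows).
    """
    # Start every tensor at the highest-precision level.
    bpw = {name: levels[0] for name, _, _ in tensors}
    n_total = sum(p for _, p, _ in tensors)
    avg = lambda: sum(bpw[n] * p for n, p, _ in tensors) / n_total

    # Repeatedly demote the tensor whose demotion costs the least
    # estimated quality, until the global BPW target is met.
    while avg() > target_bpw:
        candidates = []
        for name, n_params, sens in tensors:
            i = levels.index(bpw[name])
            if i + 1 < len(levels):
                # Toy cost model: error grows with sensitivity and with the
                # size of the precision drop, normalized by tensor size.
                cost = sens * (levels[i] - levels[i + 1]) / n_params
                candidates.append((cost, name, levels[i + 1]))
        if not candidates:
            break  # everything is already at the lowest level
        _, name, new_level = min(candidates)
        bpw[name] = new_level
    return bpw

# Hypothetical model: embeddings marked as most sensitive, FFN least.
tensors = [
    ("tok_embd", 100_000_000, 5.0),
    ("blk.ffn_up", 300_000_000, 1.0),
    ("blk.attn_qkv", 200_000_000, 2.0),
]
mix = allocate_bpw(tensors, target_bpw=4.5)
# The most sensitive tensor keeps the most precision in the resulting mix.
```

A real implementation would measure sensitivity empirically (e.g., from per-tensor quantization error on calibration data) and target an exact file size rather than an average BPW, but the greedy shape of the trade-off is the same.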
- liked a model 2 days ago: LuffyTheFox/Qwen3.6-35B-A3B-Plus-Uncensored-Wasserstein-GGUF
Organizations
Green-Sky's datasets (3)
- Green-Sky/mmlu-redux-2.0-for-llama.cpp • Preview • Updated 13 days ago • 192
- Green-Sky/mmlu-redux-for-llama.cpp • Preview • Updated 13 days ago • 133
- Green-Sky/LongBench-v2-for-llama.cpp • Viewer • Updated 14 days ago • 503 • 24