# Compressed nanoGPT (enwik8)
## Compression Results
- Performance: 1.635 → 1.637 BPC (+0.002)
- Parameters: 28,801,536 → 27,359,744
- Compression: 1.053× smaller (5.0% reduction)
- Quality loss: only 0.1% degradation
This demonstrates that a character-level transformer can be compressed with negligible quality loss.
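The headline numbers above follow directly from the reported values; a minimal sketch of the arithmetic:

```python
orig_bpc, comp_bpc = 1.635, 1.637
orig_params, comp_params = 28_801_536, 27_359_744

# Compression ratio and relative parameter reduction
ratio = orig_params / comp_params             # ~1.053x
reduction = 1 - comp_params / orig_params     # ~5.0%

# Relative quality degradation in bits per character
degradation = (comp_bpc - orig_bpc) / orig_bpc  # ~0.12%

print(f"{ratio:.3f}x smaller, {reduction:.1%} fewer parameters, "
      f"{degradation:.2%} BPC degradation")
```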
## Usage
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "prompterminal/nanogpt-enwik8-compressed-working",
    trust_remote_code=True,
)

# Generate text from a random character-level starting context
prompt = torch.randint(0, 6060, (1, 10))  # random token IDs as a prompt
output = model.generate(prompt, max_new_tokens=100)
```
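To sanity-check the reported parameter count, you can sum the loaded model's parameters; a minimal sketch, assuming the model loads as above:

```python
# Count trainable and non-trainable parameters of the compressed model
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params:,}")  # expected: 27,359,744
```

Note that `output` contains token IDs; decoding them back to characters depends on the tokenizer shipped with this repo's custom code.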
## Research Impact
A first demonstration of high-quality compression on character-level transformers.