# lila-e8-time2026

A Lila-E8 model trained on Time-2026 temporal lattice stories.
## Model Details
- Architecture: Lila-E8 (Lie Lattice Attention)
- E8 Quantization: 240 roots
- Geometric Attention: E8-biased
- Training Steps: 9000
- Framework: PyTorch
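For context on the "240 roots" figure: the E8 root system consists of exactly 240 vectors, and the count can be verified by direct enumeration. The snippet below is an illustrative sketch, not the model's actual quantization code, which lives in the training repo.

```python
from itertools import combinations, product

# Type-1 roots: +/-e_i +/-e_j for i < j  ->  C(8, 2) * 4 = 112 vectors
type1 = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1.0, -1.0), repeat=2):
        v = [0.0] * 8
        v[i], v[j] = si, sj
        type1.append(tuple(v))

# Type-2 roots: all coordinates +/-1/2 with an even number of
# minus signs  ->  2^7 = 128 vectors
type2 = [v for v in product((0.5, -0.5), repeat=8)
         if sum(x < 0 for x in v) % 2 == 0]

roots = type1 + type2
print(len(roots))  # 240
```

Every root has squared norm 2, which is what makes the set usable as a uniform quantization codebook on the unit-radius sphere after rescaling.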
## Usage
```python
import torch
from model.lila_e8 import LilaE8Config, LilaE8

# Load checkpoint
checkpoint = torch.load('checkpoint_step_9000.pt', map_location='cpu')
config = checkpoint['config']
model = LilaE8(config)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()  # switch to inference mode

# Generate
# (see inference examples in repo)
```
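The repo's inference examples are authoritative; as a generic illustration of how autoregressive generation works with any PyTorch language model, here is a minimal greedy-decoding loop. `StubLM` and `greedy_generate` are hypothetical stand-ins, since the real `LilaE8` forward signature and tokenizer are defined in the repo.

```python
import torch

# Tiny stand-in model: maps token ids to next-token logits,
# mimicking the (batch, seq) -> (batch, seq, vocab) shape convention.
class StubLM(torch.nn.Module):
    def __init__(self, vocab_size=16, dim=8):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, ids):
        return self.head(self.emb(ids))

@torch.no_grad()
def greedy_generate(model, ids, max_new_tokens=8):
    model.eval()
    for _ in range(max_new_tokens):
        logits = model(ids)                        # (batch, seq, vocab)
        next_id = logits[:, -1, :].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=1)     # append greedy pick
    return ids

model = StubLM()
out = greedy_generate(model, torch.tensor([[1, 2, 3]]), max_new_tokens=5)
print(out.shape)  # torch.Size([1, 8])
```

Swapping in the real checkpoint-loaded model for `StubLM` (plus the repo's tokenizer for encoding/decoding) gives a basic greedy sampler; temperature or top-k sampling would replace the `argmax` line.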
## Training
Trained on an NVIDIA RTX 3080 Ti using a reproducible Nix environment.
See `sovereign-lila-e8` for the training code.