ARC-IT: Rule-Conditioned Transformer for ARC-AGI
A transformer architecture for ARC-AGI abstract reasoning tasks that explicitly extracts transformation rules from demonstration pairs and applies them to new inputs. It consists of four components:
- GridTokenizer -- Embeds discrete ARC grids (cell values 0-11) into continuous patch tokens (sketched below)
- RuleEncoder -- Extracts transformation rules from demo input/output pairs via cross-attention
- RuleApplier -- Applies the learned rules to a test input via cross-attention
- SpatialDecoder -- Converts output tokens to 64x64 grid logits
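As a rough illustration of the tokenization step, a grid tokenizer along these lines might look like the sketch below. The class name, patch size, and projection layer are assumptions for illustration, not the arc_it implementation.

```python
import torch
import torch.nn as nn

class GridTokenizerSketch(nn.Module):
    """Hypothetical sketch of the GridTokenizer idea; not the arc_it implementation."""

    def __init__(self, vocab_size: int = 12, hidden: int = 384, patch: int = 4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Embedding(vocab_size, hidden)          # one vector per cell value 0-11
        self.proj = nn.Linear(patch * patch * hidden, hidden)  # fold each patch into one token

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (B, 64, 64) long tensor of cell values
        x = self.embed(grid)                                   # (B, 64, 64, hidden)
        B, H, W, D = x.shape
        p = self.patch
        x = x.view(B, H // p, p, W // p, p, D).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(B, (H // p) * (W // p), p * p * D)       # one row per patch
        return self.proj(x)                                    # (B, num_patches, hidden)
```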
Architecture
Demo Pairs -> GridTokenizer -> RuleEncoder (cross-attention + aggregation) -> Rule Tokens
Test Input -> GridTokenizer -> RuleApplier (cross-attention to rules) -> SpatialDecoder -> Predicted Grid
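Read as code, this flow might look like the following sketch. The submodule names (tokenizer, rule_encoder, rule_applier, decoder) and call signatures are assumptions, not the actual arc_it API.

```python
import torch

def predict_sketch(model, demo_inputs, demo_outputs, test_input):
    """Illustrative forward pass following the data flow above (assumed API)."""
    with torch.no_grad():
        # 1. Tokenize each demo grid and the test input into patch tokens
        demo_in_tok = [model.tokenizer(g) for g in demo_inputs]
        demo_out_tok = [model.tokenizer(g) for g in demo_outputs]
        test_tok = model.tokenizer(test_input)

        # 2. Cross-attend over each demo pair, then aggregate into rule tokens
        rules = model.rule_encoder(demo_in_tok, demo_out_tok)

        # 3. Let the test tokens cross-attend to the rule tokens
        out_tok = model.rule_applier(test_tok, rules)

        # 4. Decode output tokens to 64x64 grid logits and take the argmax
        logits = model.decoder(out_tok)   # (B, num_classes, 64, 64)
        return logits.argmax(dim=1)       # predicted grid
```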
Training
- Two-stage training: Full Training followed by Hard Focus (ARC-AGI-2 oversampling)
- Test-Time Training (TTT): Per-task fine-tuning on demonstration examples
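The TTT step can be sketched as a short per-task fine-tuning loop over the demonstrations. The leave-one-out objective, optimizer, learning rate, and step count below are assumptions for illustration, not the repository's actual settings.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_train_sketch(model, demo_inputs, demo_outputs, steps=20, lr=1e-4):
    """Illustrative per-task fine-tuning on the demo pairs (hypothetical settings)."""
    tuned = copy.deepcopy(model)   # keep the base checkpoint untouched
    tuned.train()
    opt = torch.optim.AdamW(tuned.parameters(), lr=lr)
    for _ in range(steps):
        for i, (x, y) in enumerate(zip(demo_inputs, demo_outputs)):
            # Hold out pair i as the training target, condition on the rest
            ctx_in = demo_inputs[:i] + demo_inputs[i + 1:]
            ctx_out = demo_outputs[:i] + demo_outputs[i + 1:]
            logits = tuned(ctx_in, ctx_out, x)   # assumed forward signature
            loss = F.cross_entropy(logits, y)    # y: (B, 64, 64) target grid
            opt.zero_grad()
            loss.backward()
            opt.step()
    return tuned
```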
Model Details
- Training step: 18,000
- Best validation accuracy: 0.733
- Hidden size: 384
- Rule Encoder: 2 pair layers, 2 aggregation layers, 64 rule tokens
- Rule Applier: 4 layers, 8 heads
- Canvas size: 64
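Collected in one place, these hyperparameters might form a config object like the sketch below; the field names are illustrative, not the repository's config schema. A config along these lines is presumably what ARCITModel.from_config expects in the Usage section.

```python
from dataclasses import dataclass

@dataclass
class ARCITConfigSketch:
    """Illustrative config mirroring the numbers above; field names are assumptions."""
    hidden_size: int = 384
    rule_pair_layers: int = 2    # cross-attention layers per demo pair
    rule_agg_layers: int = 2     # aggregation layers over all pairs
    num_rule_tokens: int = 64
    applier_layers: int = 4
    applier_heads: int = 8
    canvas_size: int = 64        # grids are padded to a 64x64 canvas
    vocab_size: int = 12         # cell values 0-11
```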
Usage
```python
import torch
from arc_it.models.arc_it_model import ARCITModel

# Build the model from its config (see the repository for how the config is defined)
model = ARCITModel.from_config(config)

# weights_only=False because the checkpoint stores more than raw tensors
ckpt = torch.load("model.pt", map_location="cpu", weights_only=False)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()
```
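Continuing from the snippet above, inference on a single task might look like the following. The forward signature and the (1, 64, 64) grid format are assumptions; the dummy tensors stand in for a real ARC task.

```python
# Hypothetical inference call; the real forward signature may differ.
demo_inputs = [torch.zeros(1, 64, 64, dtype=torch.long)]
demo_outputs = [torch.zeros(1, 64, 64, dtype=torch.long)]
test_input = torch.zeros(1, 64, 64, dtype=torch.long)

with torch.no_grad():
    logits = model(demo_inputs, demo_outputs, test_input)  # assumed (1, C, 64, 64)
    prediction = logits.argmax(dim=1).squeeze(0)           # (64, 64) predicted grid
```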
Links
- Repository: https://github.com/REDDITARUN/arc_it
- ARC-AGI: https://arcprize.org