# NeuroGolf 2026: Ultra-Efficient ARC-AGI Solver

## Overview
This repository contains the implementation of NeuroGolf 2026, an ultra-efficient model designed to solve Abstraction and Reasoning Corpus (ARC-AGI) image transformations.
The project focuses on maximizing reasoning capability while strictly adhering to extreme model size constraints required for competition submission.
## Competition Constraints
The model is strictly optimized to meet the following requirements:
- ONNX File Size Limit: ≤ 1.44 MB
- Parameter Budget:
  - ~360K parameters (Float32)
  - ~1.4M parameters (INT8 quantized)
- Input/Output Shape: `(1, 10, 30, 30)` for both input and output logits
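The two parameter budgets follow directly from the file-size limit: at 4 bytes per Float32 weight, 360K parameters serialize to roughly 1.44 MB, while 1-byte INT8 weights allow about 1.4M parameters in the same budget. A quick sanity-check sketch (the helper name is illustrative, not part of the codebase):

```python
def serialized_size_mb(num_params: int, bytes_per_param: int) -> float:
    """Estimate raw weight storage in decimal megabytes."""
    return num_params * bytes_per_param / 1_000_000

# Float32: 4 bytes per parameter -- right at the limit
print(serialized_size_mb(360_000, 4))    # 1.44
# INT8: 1 byte per parameter -- fits with a little headroom
print(serialized_size_mb(1_400_000, 1))  # 1.4
```

Note this counts weights only; ONNX graph metadata adds a small overhead on top, which is why the budgets sit slightly under the hard limit.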
## Architecture
The system uses a Teacher-Student Distillation framework to compress high-level reasoning into a micro-scale deployable model.

### Mega-Teacher Model (MegaTeacherARCNet)
- Purpose: Captures complex patterns and logic across 400+ ARC tasks
- Dimensions: 512 hidden units, 16 residual blocks deep
- Technique: Standard convolutions + deep residual architecture for maximum pattern recognition
### Student Model (UltraTinyARCNet)
- Purpose: Final deployable model optimized for strict size limits
- Dimensions: 56 hidden units, 5 residual blocks deep
## Key Techniques
- Depthwise Separable Convolutions → ~10× parameter reduction
- No Bias Terms → `bias=False` in Conv2d to reduce parameter count
- Residual Blocks → Maintain gradient flow in ultra-small networks
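The parameter savings come from replacing a dense k×k convolution (C_in·C_out·k² weights) with a depthwise pass (C_in·k²) followed by a 1×1 pointwise pass (C_in·C_out), both without bias. A framework-free sketch of the arithmetic, using the student's channel width of 56 (helper names are illustrative):

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Dense convolution, bias=False: every output channel mixes all inputs.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise (one k*k filter per input channel) + 1x1 pointwise mixing.
    return c_in * k * k + c_in * c_out

std = standard_conv_params(56, 56, 3)           # 28224
sep = depthwise_separable_params(56, 56, 3)     # 3640
print(f"{std / sep:.1f}x smaller")              # ~7.8x at width 56
```

The ratio approaches k² (9× for 3×3 kernels) as channel width grows, which is where the "~10×" rule of thumb comes from.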
## Training Pipeline
### 1. Teacher Training
- Train the Mega-Teacher for 50 epochs
- Dataset: full 400+ ARC tasks
- Augmentation: 8× (rotations + flips)
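The 8× factor is the dihedral symmetry group of a grid: four rotations, each with and without a flip. A minimal numpy sketch of the idea (not the project's actual data loader):

```python
import numpy as np

def dihedral_augment(grid: np.ndarray) -> list[np.ndarray]:
    """Return the 8 symmetry variants of a grid: 4 rotations x optional flip."""
    variants = []
    for k in range(4):
        rotated = np.rot90(grid, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

grid = np.array([[1, 2], [3, 4]])
aug = dihedral_augment(grid)
print(len(aug))  # 8 variants per training example
```

For ARC, the same transform must be applied to the input and output grids of a task pair so the underlying rule is preserved.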
### 2. Knowledge Distillation
- Student learns from the teacher's soft probability distributions
- Transfers "dark knowledge"
- Achieves better generalization than hard-label training
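In standard knowledge distillation (Hinton et al.), the student matches the teacher's temperature-softened softmax via KL divergence; the "dark knowledge" is the relative probability mass the teacher spreads over non-target classes. A numpy sketch of the soft-target loss term (the temperature value and function names are illustrative, not the repository's implementation):

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)            # soft targets
    log_q = np.log(softmax(student_logits, temperature))
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    return temperature**2 * np.sum(p * (np.log(p) - log_q), axis=-1).mean()

teacher = np.array([[5.0, 1.0, 0.5]])
student = np.array([[4.0, 2.0, 0.0]])
print(distillation_loss(student, teacher))  # positive; 0 iff distributions match
```

In practice this soft-target term is usually combined with a small weighted cross-entropy on the hard labels.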
### 3. Pruning & Fine-Tuning

#### Pruning
- Remove 30β35% of low-magnitude weights
- Method: L1 unstructured pruning
- Ensures ONNX file remains under 1.44 MB
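L1 unstructured pruning zeroes the individual weights with the smallest absolute value; in PyTorch this is `torch.nn.utils.prune.l1_unstructured`. A framework-free sketch of the same idea on a raw weight matrix:

```python
import numpy as np

def l1_unstructured_prune(weights: np.ndarray, amount: float) -> np.ndarray:
    """Zero out the `amount` fraction of weights with the smallest |w|."""
    k = int(amount * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((56, 56))
pruned = l1_unstructured_prune(w, 0.30)
sparsity = 1 - np.count_nonzero(pruned) / pruned.size
print(f"sparsity: {sparsity:.0%}")  # ~30% zeros
```

Unstructured zeros do not shrink a dense tensor by themselves; the size saving materializes once the zeroed weights compress well in the serialized file.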
#### Fine-Tuning
- 20 epochs recovery training
- Restores performance lost during pruning
## Installation & Usage

### Prerequisites
- Linux environment (Debian/Kali recommended)
- Python 3.10+
- NVIDIA GPU with CUDA support (optimized for a 2× T4 setup)
### Setup

```bash
pip install torch torchvision numpy onnx onnxruntime
mkdir -p data/training/
# Place ARC task JSON files in data/training/
```