# Morpho-Logic Engine (MLE): Adaptive Learning System

## Overview
The Morpho-Logic Engine (MLE) is a high-dimensional sparse distributed memory system with energy-based dynamics, optimized for CPU performance through bit-slicing SIMD operations. It learns continuously during inference without classical backpropagation, using purely local, energy-driven updates.
## Core Architecture
The system comprises five integrated modules that co-evolve during operation:
### 1. Memory: Adaptive Sparse Address Table

- 4096-bit binary vectors with a target sparsity of 5% (200 active bits)
- Dynamic creation: new vectors spawn for recurrent or under-represented patterns
- Fusion & specialization: close vectors merge; context-dependent specializations branch off
- Local reorganization: semantic neighborhood coherence is improved iteratively
- Controlled forgetting: pruning of under-used entries prevents drift
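The fusion step above can be sketched in a few lines. This is an illustrative toy, not the MLE API: the function names and the deterministic refill rule are assumptions. Two nearby vectors merge by keeping the bits they agree on, then refilling from the symmetric difference up to the 200-bit target sparsity.

```python
import numpy as np

DIM, ACTIVE = 4096, 200  # 4096-bit vectors, 5% target sparsity

def random_sparse_vector(rng):
    """Binary vector with exactly ACTIVE bits set."""
    v = np.zeros(DIM, dtype=np.uint8)
    v[rng.choice(DIM, size=ACTIVE, replace=False)] = 1
    return v

def fuse(a, b):
    """Merge two close vectors: keep the bits both agree on, then
    refill from the symmetric difference up to the target sparsity."""
    merged = a & b
    deficit = ACTIVE - int(merged.sum())
    if deficit > 0:
        candidates = np.flatnonzero(a ^ b)   # bits set in exactly one vector
        merged[candidates[:deficit]] = 1
    return merged

rng = np.random.default_rng(0)
a = random_sparse_vector(rng)
b = a.copy()
off = rng.choice(np.flatnonzero(a == 1), size=20, replace=False)
on = rng.choice(np.flatnonzero(a == 0), size=20, replace=False)
b[off], b[on] = 0, 1                         # b is a close variant of a
c = fuse(a, b)
print(int(c.sum()))                          # -> 200, sparsity preserved
```

The refill keeps the merged vector at exactly the target sparsity, so fusion never drifts the table toward denser codes.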
### 2. Routing: Hamming Distance + Bit-Slicing SIMD

- Vectors packed into 64 × uint64 slices
- Parallel Hamming distance computation via bit-twiddling popcount
- Inverted index per slice for sub-linear candidate retrieval
- Learned route cache: frequently traversed query→neighbor mappings are memorized
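The XOR + popcount path can be sketched in plain NumPy. A byte-lookup popcount stands in for the JIT-compiled Numba kernel mentioned later; the function names are illustrative.

```python
import numpy as np

DIM = 4096
POP8 = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

def pack(v):
    """Pack a 4096-bit {0,1} vector into 64 uint64 words (bit-slicing)."""
    return np.packbits(v.astype(np.uint8)).view(np.uint64)  # shape (64,)

def hamming(p, q):
    """Hamming distance on packed words via XOR + byte-table popcount."""
    x = (p ^ q).view(np.uint8)
    return int(POP8[x].sum())

rng = np.random.default_rng(1)
a = (rng.random(DIM) < 0.05).astype(np.uint8)
b = a.copy()
b[:10] ^= 1                        # flip exactly the first 10 bits
print(hamming(pack(a), pack(b)))   # -> 10
```

Operating on 64 packed words instead of 4096 individual bits is where the quoted 64× bandwidth reduction comes from.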
### 3. Binding: Circular Convolution
- Role-filler binding via circular convolution in frequency domain (FFT)
- Structure composition: multiple role-filler pairs superposed into composite vectors
- Robust unbinding: recover fillers from bound representations
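Binding and unbinding via FFT follow the standard holographic reduced representation recipe. This sketch uses real-valued vectors for clarity; the MLE's binary variant will differ in detail, and all names here are illustrative.

```python
import numpy as np

DIM = 4096
rng = np.random.default_rng(2)

def rand_vec():
    """Random real vector with variance 1/DIM (standard for HRRs)."""
    return rng.normal(0, 1 / np.sqrt(DIM), DIM)

def bind(role, filler):
    """Circular convolution, computed in the frequency domain."""
    return np.real(np.fft.ifft(np.fft.fft(role) * np.fft.fft(filler)))

def unbind(role, bound):
    """Circular correlation: approximate inverse of bind."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(role)) * np.fft.fft(bound)))

r1, f1 = rand_vec(), rand_vec()
r2, f2 = rand_vec(), rand_vec()
composite = bind(r1, f1) + bind(r2, f2)   # superpose two role-filler pairs

recovered = unbind(r1, composite)
# recovered is a noisy copy of f1: clean up by nearest-neighbor matching.
sims = [recovered @ v / (np.linalg.norm(recovered) * np.linalg.norm(v))
        for v in (f1, f2)]
print(np.argmax(sims))  # -> 0 (f1 is the best match)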
### 4. Energy Landscape: Learnable Coherence Function
- Hamming energy: local coherence via neighbor distances
- Hebbian-like associations: co-occurring vectors in low-energy states strengthen links
- Anti-Hebbian for instability: high-energy configurations weaken spurious associations
- Adaptive biases: per-bit biases shift based on experience
- No global gradient: all updates are purely local
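A toy version of the energy and its local update rule makes the four bullets concrete. The threshold, learning rate, and clipping are assumptions for illustration, not the MLE's actual parameters.

```python
import numpy as np

DIM, ACTIVE = 4096, 200
rng = np.random.default_rng(3)

def sparse_vec():
    v = np.zeros(DIM, dtype=np.uint8)
    v[rng.choice(DIM, size=ACTIVE, replace=False)] = 1
    return v

def energy(state, neighbors, weights, bias):
    """Hamming energy: weighted distances to stored neighbors minus a
    per-bit bias reward for active bits. Lower energy = more coherent."""
    dist = sum(w * np.count_nonzero(state != n)
               for n, w in zip(neighbors, weights))
    return dist - float(bias @ state)

def local_update(weights, bias, state, e, threshold, lr=0.01):
    """Hebbian when the state is low-energy, anti-Hebbian when high:
    purely local, no gradient flows through the rest of the system."""
    sign = 1.0 if e < threshold else -1.0
    weights = np.clip(weights + sign * lr, 0.0, 1.0)
    bias = bias + sign * lr * state          # shift per-bit biases
    return weights, bias

neighbors = [sparse_vec() for _ in range(4)]
weights = np.full(4, 0.5)
bias = np.zeros(DIM)
s = sparse_vec()
e = energy(s, neighbors, weights, bias)
weights, bias = local_update(weights, bias, s, e, threshold=e + 1)
```

Note that every quantity the update touches is local to the state and its stored neighbors, which is exactly the "no global gradient" property.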
### 5. Inference: Online Learning through Energy Minimization
- Stochastic bit-flip descent with simulated annealing temperature schedule
- Metropolis-Hastings acceptance for exploration/exploitation balance
- Learning during inference: associations, biases, and routes update at every iteration
- Post-inference reinforcement: stable low-energy trajectories are consolidated
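The annealed bit-flip descent with Metropolis-Hastings acceptance can be sketched as follows, against a toy energy function; the linear cooling schedule and step count are assumptions.

```python
import numpy as np

def bitflip_descent(state, energy_fn, steps=500, t0=0.5, rng=None):
    """Stochastic single-bit-flip descent with a linear annealing
    schedule and Metropolis-Hastings acceptance."""
    if rng is None:
        rng = np.random.default_rng()
    s = state.copy()
    e = energy_fn(s)
    trajectory = [e]
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9   # cool toward zero
        i = rng.integers(len(s))
        s[i] ^= 1                                  # propose one bit flip
        e_new = energy_fn(s)
        if e_new <= e or rng.random() < np.exp((e - e_new) / temp):
            e = e_new                              # accept the move
        else:
            s[i] ^= 1                              # reject: undo the flip
        trajectory.append(e)
    return s, trajectory

# Toy energy: Hamming distance to a fixed attractor pattern.
rng = np.random.default_rng(4)
target = (rng.random(256) < 0.05).astype(np.uint8)
start = (rng.random(256) < 0.05).astype(np.uint8)
final, traj = bitflip_descent(
    start, lambda v: int(np.count_nonzero(v != target)), rng=rng)
print(traj[0], traj[-1])   # start vs. final energy
```

The uphill-acceptance probability `exp((e - e_new) / temp)` shrinks as the temperature cools, which is the exploration/exploitation balance the bullet above describes; in the real system, learning hooks would fire at each accepted move.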
## Key Capabilities

### Continuous Online Learning
The system learns while it reasons. Every inference pass updates:
- Vector co-activation weights
- Energy landscape associations
- Routing cache entries
- Memory structure (creation, fusion, specialization)
### Generalization through Composition
- Binding/unbinding enables compositional reasoning
- Pattern abstraction detects recurrent low-energy trajectories and compiles them into new memory units
- Structure reuse: existing sub-patterns are recycled in novel contexts
### Semantic Coherence

Local reorganization ensures that vectors close in Hamming space correspond to semantically related concepts. The coherence score is monitored continuously.
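The exact coherence formula is not given here; one plausible stand-in, shown purely for illustration, scores how often a vector's nearest Hamming neighbors share its semantic label. Everything below, including the metric itself, is an assumption.

```python
import numpy as np

def coherence_score(vectors, labels, k=5):
    """Illustrative coherence metric: average fraction of each vector's
    k nearest Hamming neighbors that share its semantic label."""
    n, dim = vectors.shape
    hits = 0.0
    for i in range(n):
        d = np.count_nonzero(vectors != vectors[i], axis=1)
        d[i] = dim + 1                    # exclude self from neighbors
        nn = np.argsort(d)[:k]
        hits += float(np.mean(labels[nn] == labels[i]))
    return hits / n

# Two noisy clusters around random sparse prototypes.
rng = np.random.default_rng(5)
protos = (rng.random((2, 256)) < 0.05).astype(np.uint8)
vectors, labels = [], []
for label, p in enumerate(protos):
    for _ in range(10):
        v = p.copy()
        v[rng.choice(256, size=5, replace=False)] ^= 1   # 5-bit noise
        vectors.append(v)
        labels.append(label)
vectors, labels = np.array(vectors), np.array(labels)
print(round(coherence_score(vectors, labels), 3))
```

On well-separated clusters like these the score approaches 1.0; a table whose Hamming neighborhoods mix unrelated concepts would score near the chance level.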
### CPU-Optimized Performance
- All core operations use vectorized NumPy and JIT-compiled Numba kernels
- No dense matrix multiplications
- Bit-slicing reduces memory bandwidth by 64×
- Hamming distances computed via XOR + popcount
## Benchmark Results

- Learning confirmed: energy decreased with experience
- Binding accuracy: 100% (10/10)
- Semantic coherence: 0.996
- Avg inference time: ~540 ms
- Memory growth: controlled (auto-pruning)
- Convergence rate: ~78%
## Usage

```python
from mle import MLESystem
import numpy as np

# Initialize
mle = MLESystem(
    memory_capacity=2000,
    online_learning=True,
    temperature=0.5,
)

# Create a sparse input vector
vec = np.zeros(4096, dtype=np.uint8)
vec[np.random.choice(4096, size=200, replace=False)] = 1

# Process (inference + learning)
result = mle.process(vec)
print(f"Converged: {result.converged}")
print(f"Energy: {result.energy_trajectory[-1]:.1f}")

# Query neighbors
neighbors = mle.query(vec, k=5)

# Check system health
mle.print_summary()
```
## Directory Structure

```
mle/
├── __init__.py      # Package exports
├── memory.py        # Adaptive Sparse Address Table
├── routing.py       # Hamming router with bit-slicing
├── binding.py       # Circular convolution binder
├── energy.py        # Learnable energy landscape
├── inference.py     # Online learning inference engine
├── mle_system.py    # Full system integration + metrics
└── tests.py         # Comprehensive benchmark suite
```
## Design Principles
- Locality: every update touches only a neighborhood, no global passes
- Sparsity: 5% active bits → 95% of computation skipped implicitly
- Energy as teacher: low energy = good, high energy = bad, no labels needed
- Memory is computation: the memory table is the model; no separate weights
- Continuity: training and inference are the same operation
## Future Directions
- Multi-resolution binding for hierarchical structures
- Cross-modal binding (vision + language in shared space)
- Energy landscape visualization and analysis
- Distributed memory shards for web-scale operation
- Integration with LLM token embeddings for hybrid reasoning