---
title: EvoTransformer V2.1
emoji: 🔥
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 5.36.2
app_file: app.py
pinned: false
license: apache-2.0
---
# EvoTransformer: An Evolving Reasoning Engine

A custom-built Transformer that learns how to reason and evolves its own architecture with real-time feedback.
## 🚀 What It Does

- ✅ Compares two options and picks the more logical one
- ✅ Trained on PIQA-style physical reasoning
- ✅ Competes live with GPT-3.5 on real-world prompts
- ✅ Accepts user feedback and evolves post-deployment
- ✅ Visualizes model performance generation by generation
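The first bullet boils down to scoring each (goal, option) pair and keeping the argmax. A minimal sketch of that flow, where `overlap_score` is a toy stand-in scorer (not EvoTransformer itself) and both helper names are illustrative:

```python
def pick_option(score, goal, option_a, option_b):
    """Return the index (0 or 1) of the higher-scoring option."""
    return max((0, 1), key=lambda i: score(goal, (option_a, option_b)[i]))

def overlap_score(goal, option):
    """Toy stand-in scorer: word overlap between goal and option.
    The real app would call the model on an encoded (goal, option) pair."""
    g, o = set(goal.lower().split()), set(option.lower().split())
    return len(g & o)
```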
## ⚙️ Architecture Overview

| Component | Spec |
| --- | --- |
| Layers | 6 × TransformerEncoder |
| Hidden Size | 384 |
| Attention Heads | 6 |
| FFN Dim | 1024 |
| Memory Module | Optional (disabled in v2.1) |
| Total Parameters | ~13M |
| Built With | `torch.nn.TransformerEncoder` |
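The spec above maps directly onto PyTorch's `nn.TransformerEncoder`. A minimal sketch, assuming a learned positional embedding and a one-logit scoring head (both assumptions; `evo_model.py` in the repo is authoritative):

```python
import torch
import torch.nn as nn

class EvoTransformer(nn.Module):
    """Sketch of the v2.1 spec: 6 encoder layers, d_model=384,
    6 heads, FFN dim 1024. The head and defaults are illustrative."""
    def __init__(self, vocab_size=30522, d_model=384, nhead=6,
                 num_layers=6, dim_ff=1024, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_ff, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # one logit per (goal, option) pair

    def forward(self, input_ids):
        x = self.embed(input_ids) + self.pos[:, :input_ids.size(1)]
        x = self.encoder(x)
        return self.head(x.mean(dim=1)).squeeze(-1)  # shape: (batch,)
```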
## 🧪 Training Configuration

- **Task:** Commonsense Reasoning (PIQA benchmark)
- **Dataset:** 1,000 training / 500 validation examples
- **Loss:** CrossEntropyLoss
- **Optimizer:** Adam
- **Epochs:** 5
- **Environment:** Google Colab (T4 / A100 GPU)
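The recipe above can be sketched as a standard two-choice training loop. `model` is assumed to emit one logit per (goal, option) pair, and the loader format is an assumption, not the repo's actual data pipeline:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=5, lr=1e-4):
    """CrossEntropyLoss + Adam over PIQA-style pairs, per the stated config."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    history = []
    for epoch in range(epochs):
        running = 0.0
        for option_a, option_b, labels in loader:  # labels in {0, 1}
            # Stack the two per-option logits into 2-way classification logits.
            logits = torch.stack([model(option_a), model(option_b)], dim=1)
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item()
        history.append(running / len(loader))
    return history  # mean loss per epoch
```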
## 🔁 Live Feedback Loop

Every user interaction fuels EvoTransformer's growth:

- ✅ Feedback is logged to Firebase
- 🔄 The model can be retrained via a button in the app
- 📊 Accuracy and architecture history are visualized
- 🧬 The architecture can mutate across generations
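The mutation step in the last bullet can be sketched as a small random perturbation of the architecture config. The fields, ranges, and step sizes below are illustrative assumptions, not the repo's actual search space:

```python
import random

BASE_CONFIG = {"num_layers": 6, "d_model": 384, "nhead": 6, "dim_ff": 1024}

def mutate(config, rng=random):
    """Return a child config with one randomly perturbed field."""
    child = dict(config)
    knob = rng.choice(sorted(child))
    if knob == "num_layers":
        child[knob] = max(2, child[knob] + rng.choice([-1, 1]))
    elif knob == "nhead":
        # A real mutation must keep d_model divisible by nhead.
        child[knob] = rng.choice([h for h in (4, 6, 8) if child["d_model"] % h == 0])
    else:  # widths (d_model, dim_ff) move in steps of 64
        child[knob] = max(64, child[knob] + 64 * rng.choice([-1, 1]))
    return child
```

Each generation would build a fresh model from the mutated config, retrain it on the logged feedback, and keep the child only if validation accuracy improves.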
## 🌐 Try the Live Demo

▶️ Launch on Hugging Face Spaces

Ask a question, give two options, and watch Evo reason, decide, and grow.
## 💡 Why EvoTransformer?

| Feature | Benefit |
| --- | --- |
| Fully Custom Architecture | No reliance on pretrained backbones |
| Lightweight & Fast | Runs on Colab or entry-level GPUs |
| Evolves with Feedback | Learns even after deployment |
| Transparent | You can inspect, retrain, and mutate it |
| GPT-Competitive | Performs well on commonsense benchmarks |
## 📂 Repository Structure

```text
📦 root
├── app.py             # Main Gradio app
├── evo_model.py       # EvoTransformer architecture
├── inference.py       # Generates model outputs
├── logger.py          # Logs user feedback to Firebase
├── watchdog.py        # Retraining + logging engine
├── dashboard.py       # Accuracy/evolution plots
├── init_model.py      # Weight & config initializer
├── firebase_key.json  # 🔒 Private Firebase credentials
└── trained_model/     # Fine-tuned weights + logs
```
## 📜 License

Apache 2.0

Open to experimentation, extension, and collaboration. Fork it. Mutate it. Evolve it.
## 👤 Author

**Dr. Heman Mohabeer**
Founder, Intelligent Africa Ltd

- 🌍 AI Strategist • Researcher • Visionary
- 📧 heman.m@iafrica.solutions
- 🔗 LinkedIn
## 🤝 Contribute

Want to push Evo further?

- 🧬 Propose new mutations
- 🧪 Submit architecture variants
- 🧠 Benchmark against other reasoning datasets
- 📦 PRs welcome for live retraining, visual dashboards, or token optimization

Let's build the world's most adaptable reasoning AI, one mutation at a time.