---
title: EvoTransformer V2.1
emoji: πŸ”₯
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 5.36.2
app_file: app.py
pinned: false
license: apache-2.0
---

# EvoTransformer – An Evolving Reasoning Engine

A custom-built Transformer that learns how to reason, and evolves its own architecture with real-time feedback.

πŸ” What It Does
βœ… Compares two options and picks the more logical one
βœ… Trained on PIQA-style physical reasoning
βœ… Competes live with GPT-3.5 on real-world prompts
βœ… Accepts user feedback and evolves post-deployment
βœ… Visualizes model performance generation by generation

βš™οΈ Architecture Overview
Component	Spec
Layers	6 Γ— TransformerEncoder
Hidden Size	384
Attention Heads	6
FFN Dim	1024
Memory Module	Optional (πŸ”• disabled in v2.1)
Total Parameters	~13M
Built With	torch.nn.TransformerEncoder
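The spec table maps directly onto PyTorch modules. A minimal sketch, assuming a vocabulary of 10,000 tokens, mean pooling, and a single-logit scoring head per option (those three choices are assumptions; the real architecture lives in `evo_model.py`):

```python
import torch
import torch.nn as nn

class EvoSketch(nn.Module):
    """Illustrative stand-in for EvoTransformer.

    Only the layer count, hidden size, head count, and FFN dim come from
    the spec table; vocab_size, pooling, and the head are assumptions.
    """
    def __init__(self, vocab_size=10_000, d_model=384, nhead=6,
                 dim_feedforward=1024, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead,
            dim_feedforward=dim_feedforward, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)   # one plausibility score per option

    def forward(self, token_ids):           # token_ids: (batch, seq_len)
        x = self.encoder(self.embed(token_ids))
        return self.head(x.mean(dim=1)).squeeze(-1)  # mean-pool, then score

model = EvoSketch()
scores = model(torch.randint(0, 10_000, (2, 32)))   # two candidate options
print(scores.shape)  # torch.Size([2]); the higher-scoring option wins
```

With these assumed sizes the module lands in the ~13M-parameter range the table quotes, most of it split between the embedding matrix and the six encoder layers.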

## πŸ§ͺ Training Configuration

- **Task:** Commonsense Reasoning (PIQA benchmark)
- **Dataset:** 1,000 training / 500 validation examples
- **Loss:** CrossEntropyLoss
- **Optimizer:** Adam
- **Epochs:** 5
- **Environment:** Google Colab (T4 / A100 GPU)
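The loss/optimizer/epoch choices above can be exercised in a few lines of PyTorch. A minimal sketch only; the linear stand-in model, random batch, and learning rate are placeholders, not the project's actual training code:

```python
import torch
import torch.nn as nn

# Sketch of the configuration above: CrossEntropyLoss, Adam, 5 epochs.
# The linear stand-in, random batch, and lr=1e-3 are placeholder assumptions.
torch.manual_seed(0)
model = nn.Linear(384, 2)               # stand-in scorer: one logit per option
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(64, 384)         # pooled encodings of (goal, option) pairs
labels = torch.randint(0, 2, (64,))     # index of the correct option

for epoch in range(5):                  # Epochs: 5
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```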

πŸ” Live Feedback Loop
Every user interaction fuels EvoTransformer's growth:

βœ… Feedback is logged to Firebase

πŸ”„ Model can be retrained via a button in the app

πŸ“ˆ Accuracy and architecture history is visualized

🧬 Architecture can mutate across generations
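The logging step can be sketched with the standard library. A local JSONL file stands in for Firebase here, and the record fields are illustrative rather than the schema `logger.py` actually uses:

```python
import json
import time

def log_feedback(question, options, model_choice, user_verdict, path):
    """Append one feedback record as a JSON line.

    Local stand-in for the Firebase logging in logger.py; the field
    names here are assumptions, not the project's real schema.
    """
    record = {
        "timestamp": time.time(),
        "question": question,
        "options": options,
        "model_choice": model_choice,   # index of the option Evo picked
        "user_verdict": user_verdict,   # e.g. "correct" or "incorrect"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Appending one JSON object per line keeps the log trivially streamable for the retraining step, which can replay records in order without loading the whole file.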

## πŸš€ Try the Live Demo

▢️ Launch on Hugging Face Spaces

Ask a question, give two options, and watch Evo reason, decide, and grow.

## πŸ’‘ Why EvoTransformer?

| Feature | Benefit |
|---|---|
| βœ… Fully Custom Architecture | No reliance on pretrained backbones |
| βœ… Lightweight & Fast | Runs on Colab or entry-level GPUs |
| βœ… Evolves with Feedback | Learns even after deployment |
| βœ… Transparent | You can inspect, retrain, and mutate it |
| βœ… GPT-Competitive | Performs well on commonsense benchmarks |


πŸ“ Repository Structure
bash
Copy
Edit
πŸ“¦ root
β”œβ”€β”€ app.py              # Main Gradio app
β”œβ”€β”€ evo_model.py        # EvoTransformer architecture
β”œβ”€β”€ inference.py        # Generates model outputs
β”œβ”€β”€ logger.py           # Logs user feedback to Firebase
β”œβ”€β”€ watchdog.py         # Retraining + logging engine
β”œβ”€β”€ dashboard.py        # Accuracy/evolution plots
β”œβ”€β”€ init_model.py       # Weight & config initializer
β”œβ”€β”€ firebase_key.json   # πŸ” Private Firebase credentials
└── trained_model/      # Fine-tuned weights + logs
## πŸ“œ License

Apache 2.0.
Open to experimentation, extension, and collaboration.
Fork it. Mutate it. Evolve it.

## πŸ‘€ Author

**Dr. Heman Mohabeer**
Founder, Intelligent Africa Ltd
πŸš€ AI Strategist β€’ Researcher β€’ Visionary
πŸ“§ heman.m@iafrica.solutions
🌍 LinkedIn

## πŸ™Œ Contribute

Want to push Evo further?

- 🧬 Propose new mutations
- πŸ§ͺ Submit architecture variants
- 🧠 Benchmark against other reasoning datasets
- πŸ“¦ PRs welcome for live retraining, visual dashboards, or token optimization

Let’s build the world’s most adaptable reasoning AI, one mutation at a time.