eyupipler committed · Commit 4923450 · verified · 1 Parent(s): 08df98a

Update README.md

Files changed (1)
  1. README.md +150 -8
README.md CHANGED
@@ -25,18 +25,160 @@ pipeline_tag: feature-extraction
  library_name: tf
  ---

- # bai-64 Mind (Preview) (TR)

- #### The bai-64 Mind model is a deep learning model trained to transfer the words and numbers in your mind into a digital medium. Running on EEG, this model is not only exciting, it also has the distinction of being the largest and most powerful in the world!

- #### Enjoy your thoughts...

- ## --------------

- # bai-64 Mind (Preview) (EN)

- #### bai-64 Mind is a deep learning model trained to digitize words and numbers in your mind. This model, which works on EEG, is not only exciting but also the largest and most powerful in the world!

- #### Enjoy your thoughts...

- **BETA model, available on 24 August!**
+ # bai-64 Mind | EEG-to-Text Model [BETA] 🧠✍️
+
+ Classify imagined speech commands from EEG brain signals using deep learning.
+
+ ![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)
+ ![TensorFlow](https://img.shields.io/badge/TensorFlow-2.x-orange.svg)
+ ![License](https://img.shields.io/badge/License-CC_BY_NC_SA_4.0-green)
+
+ ## Overview
+
+ This project enables Brain-Computer Interface (BCI) applications by decoding imagined directional commands ("Up", "Down", "Left", "Right") from EEG brain signals. Users think about a direction without speaking, and the system predicts their intended command.
+
+ ## Quick Start
+
+ ### Installation
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### Basic Usage
+
+ ```python
+ import numpy as np
+ from tensorflow import keras
+
+ # Load pre-trained model
+ model = keras.models.load_model('path/to/your/model.h5')
+
+ # Your EEG data (1 second, 64 channels, 250 Hz sampling)
+ eeg_data = np.random.randn(250, 64)  # Replace with real EEG
+
+ # Make prediction
+ prediction = model.predict(eeg_data.reshape(1, 250, 64))
+ classes = ['Up', 'Down', 'Left', 'Right']
+ predicted_command = classes[np.argmax(prediction)]
+
+ print(f"Predicted command: {predicted_command}")
+ print(f"Confidence: {np.max(prediction):.3f}")
+ ```
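+
+ The example above feeds raw values straight into the model. In practice you will usually band-pass filter and normalize each trial first; the snippet below is a minimal sketch using `scipy` (already in the dependencies). The exact preprocessing used to train bai-64 Mind is not documented here, so treat the filter settings as assumptions.
+
+ ```python
+ import numpy as np
+ from scipy.signal import butter, filtfilt
+
+ def preprocess(trial, fs=250, low=0.5, high=100.0):
+     """Band-pass filter and z-score one (250, 64) EEG trial (assumed settings)."""
+     b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype='band')
+     filtered = filtfilt(b, a, trial, axis=0)  # filter along the time axis
+     return (filtered - filtered.mean(axis=0)) / (filtered.std(axis=0) + 1e-8)
+
+ clean = preprocess(np.random.randn(250, 64))  # replace with a real trial
+ # then: model.predict(clean.reshape(1, 250, 64))
+ ```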
+
+ ## Real-Time BCI Application
+
+ ```python
+ from analysis import InnerSpeechAnalyzer
+
+ # Initialize predictor
+ analyzer = InnerSpeechAnalyzer('path/to/your/model.h5')
+ predictor = analyzer.create_real_time_predictor()
+
+ # Real-time loop
+ while True:
+     eeg_data = capture_eeg_signal()  # Your EEG acquisition function
+     command, confidence = predictor.predict_thought(eeg_data)
+
+     if confidence > 0.8:
+         execute_command(command)  # Your command execution
+         print(f"Executing: {command}")
+ ```
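+
+ `capture_eeg_signal()` is left to the user. As one possible sketch (an assumption, not part of this repository), the function below pulls a 1-second, 64-channel window from a Lab Streaming Layer stream via `pylsl`, which is not listed in the dependencies and must be installed separately.
+
+ ```python
+ import numpy as np
+ from pylsl import StreamInlet, resolve_stream
+
+ # Assumption: the amplifier publishes a 64-channel, 250 Hz LSL stream of type "EEG".
+ streams = resolve_stream('type', 'EEG')
+ inlet = StreamInlet(streams[0])
+
+ def capture_eeg_signal(n_samples=250, n_channels=64):
+     """Collect one (n_samples, n_channels) window of raw EEG from the LSL inlet."""
+     buffer = []
+     while len(buffer) < n_samples:
+         chunk, _ = inlet.pull_chunk(timeout=1.0)  # list of per-sample channel lists
+         buffer.extend(chunk)
+     return np.asarray(buffer[:n_samples], dtype=np.float32)[:, :n_channels]
+ ```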
+
+ ## Hardware Requirements
+
+ ### EEG Device
+ - **Channels**: 64 (10-20 system)
+ - **Sampling Rate**: 250+ Hz
+ - **Impedance**: <5 kΩ
+ - **Bandwidth**: 0.5-100 Hz
+
+ ### Recommended Devices
+ A full 64-channel montage is recommended to match the model input.
+ - OpenBCI Cyton + Daisy (16 channels)
+ - Emotiv EPOC X (14 channels)
+ - g.tec g.USBamp (professional grade)
+
+ ## Applications
+
+ - 🦽 **Assistive Technology**: Control for paralyzed patients
+ - 🎮 **Gaming**: Mind-controlled games and VR
+ - 🤖 **Robotics**: Brain-controlled robot navigation
+ - 💻 **Silent Computing**: Hands-free computer control
+ - 🧪 **Research**: Neuroscience and BCI studies
+
+ ## Data Format
+
+ Your EEG data should be (see the shaping sketch below):
+ - **Shape**: (250, 64) per trial
+ - **Duration**: 1 second per recording
+ - **Channels**: 64 EEG electrodes
+ - **Sampling**: 250 Hz
+ - **Classes**: ["Up", "Down", "Left", "Right"]
+
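+ If your recording is continuous rather than pre-cut trials, it has to be sliced into 1-second windows first. The helper below is a minimal sketch of that step (the name `to_trials` and the non-overlapping windowing are assumptions, not part of this repository).
+
+ ```python
+ import numpy as np
+
+ def to_trials(recording, window=250):
+     """Split a continuous (n_samples, 64) EEG array into (n_trials, window, 64)."""
+     n_trials = recording.shape[0] // window
+     trimmed = recording[:n_trials * window]
+     return trimmed.reshape(n_trials, window, recording.shape[1])
+
+ trials = to_trials(np.random.randn(1000, 64))  # -> shape (4, 250, 64)
+ ```
+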
+ ## Features
+
+ ✅ **Ready-to-use** pre-trained model
+ ✅ **Real-time prediction** for BCI applications
+ ✅ **Custom training** with your own EEG data (see the architecture sketch below)
+ ✅ **Multiple architectures** (CNN-LSTM, Transformer)
+ ✅ **EEG preprocessing** pipeline included
+ ✅ **Cross-platform** support (Windows, macOS, Linux)
+
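+ The README does not document the actual bai-64 Mind architecture. Purely as an illustration of a model that fits the (250, 64) input and four classes, here is a minimal, hypothetical CNN-LSTM sketch in Keras; it is not the shipped model.
+
+ ```python
+ import numpy as np
+ from tensorflow import keras
+ from tensorflow.keras import layers
+
+ # Hypothetical CNN-LSTM: (250 samples, 64 channels) -> 4 commands.
+ model = keras.Sequential([
+     layers.Input(shape=(250, 64)),
+     layers.Conv1D(64, kernel_size=7, padding='same', activation='relu'),
+     layers.MaxPooling1D(2),
+     layers.Conv1D(128, kernel_size=5, padding='same', activation='relu'),
+     layers.MaxPooling1D(2),
+     layers.LSTM(64),
+     layers.Dropout(0.3),
+     layers.Dense(4, activation='softmax'),
+ ])
+ model.compile(optimizer='adam',
+               loss='sparse_categorical_crossentropy',
+               metrics=['accuracy'])
+
+ # X: (n_trials, 250, 64) EEG, y: (n_trials,) integer labels 0-3
+ X = np.random.randn(32, 250, 64).astype('float32')  # placeholder data
+ y = np.random.randint(0, 4, size=32)
+ model.fit(X, y, epochs=2, batch_size=8)
+ ```
+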
+ ## Dependencies
+
+ ```text
+ tensorflow>=2.8.0,<3.0.0
+ scikit-learn>=1.0.0
+ numpy>=1.21.0
+ scipy>=1.7.0
+ pandas>=1.3.0
+ mne>=1.0.0
+ matplotlib>=3.5.0
+ seaborn>=0.11.0
+ ```
+
+ ## Example Use Cases
+
+ ### Wheelchair Control
+ ```python
+ # User thinks "Up" → wheelchair moves forward
+ # User thinks "Left" → wheelchair turns left
+ ```
+
+ ### Smart Home
+ ```python
+ # User thinks "Up" → lights turn on
+ # User thinks "Down" → lights turn off
+ ```
+
+ ### Gaming
+ ```python
+ # User thinks "Right" → character moves right
+ # Mental commands for game control
+ ```
+
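+ Each use case above reduces to mapping a predicted command to an action. One possible pattern is sketched below; the action callables are placeholders to be replaced with real device, home-automation, or game APIs (none of them are part of this repository).
+
+ ```python
+ # Map each predicted command to a placeholder action.
+ actions = {
+     'Up':    lambda: print("lights on / move forward"),
+     'Down':  lambda: print("lights off / move backward"),
+     'Left':  lambda: print("turn left"),
+     'Right': lambda: print("turn right"),
+ }
+
+ def execute_command(command):
+     """Run the mapped action; do nothing for unknown labels."""
+     action = actions.get(command)
+     if action:
+         action()
+ ```
+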
+ ## Support
+
+ - **Website**: [Neurazum](https://neurazum.com)
+ - **Email**: [contact@neurazum.com](mailto:contact@neurazum.com)
+
+ ## Note
+
+ **This project is in the *BETA* phase; use it at your own risk. Because the model is still in development, low accuracy rates may be observed. In addition, since the data belongs to <span style="color: #ff8d26; "><b>Neurazum</b></span>, the function structure may change in future models.**
+
+ ## License
+
+ CC-BY-NC-SA 4.0 - see [LICENSE](https://creativecommons.org/licenses/by-nc-sa/4.0/) for details.
+
+ ### Acknowledgments
+
+ 1. Neurazum's own dataset was used. This dataset is closed source.
+ 2. Nieto, N., Peterson, V., Rufiner, H. L., Kamienkowski, J. E., & Spies, R. (2021). "Thinking out loud, an open access EEG-based BCI dataset for inner speech recognition." bioRxiv. https://doi.org/10.1101/2021.04.19.440473
+
+ ---
+
+ *Enable mind-controlled technology with EEG! 🚀*
+
+ <span style="color: #ff8d26; "><b>Neurazum</b> AI Department</span>