---
license: cc-by-nc-sa-4.0
language:
- en
- tr
tags:
- ai
- brain
- eeg
- neuroscience
- deeplearning
- mind
- bci
- text
- ieeg
- emg
- sentence
- number
- mind-to-text
- dl
- artificial-intelligence
- first-of-world
- eeg-to-text
pipeline_tag: text-generation
library_name: tf
---
# bai-64 Mind | EEG-to-Text Model [BETA] 🧠⚡
Classify imagined speech commands from EEG brain signals using deep learning.



## Overview
This project enables Brain-Computer Interface (BCI) applications by decoding imagined directional commands ("Up", "Down", "Left", "Right") from EEG brain signals. Users think about a direction without speaking, and the system predicts their intended command.
## Quick Start
### Installation
```bash
pip install -r requirements.txt
```
### Basic Usage
```python
import numpy as np
from tensorflow import keras
# Load pre-trained model
model = keras.models.load_model('path/to/your/model.h5')
# Your EEG data (1 second, 64 channels, 250 Hz sampling)
eeg_data = np.random.randn(250, 64) # Replace with real EEG
# Make prediction
prediction = model.predict(eeg_data.reshape(1, 250, 64))
classes = ['Up', 'Down', 'Left', 'Right']
predicted_command = classes[np.argmax(prediction)]
print(f"Predicted command: {predicted_command}")
print(f"Confidence: {np.max(prediction):.3f}")
```
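`model.predict` returns a `(1, 4)` probability vector over the four classes. A small sketch of ranking all commands by confidence, using a placeholder array in place of a real model output (running the model itself requires the weights file):

```python
import numpy as np

classes = ['Up', 'Down', 'Left', 'Right']

# Placeholder for model.predict(...) output: shape (1, 4), rows sum to 1
prediction = np.array([[0.10, 0.05, 0.70, 0.15]])

# Sort class indices by descending probability
order = np.argsort(prediction[0])[::-1]
ranked = [(classes[i], float(prediction[0][i])) for i in order]
print(ranked)  # [('Left', 0.7), ('Right', 0.15), ('Up', 0.1), ('Down', 0.05)]
```

Ranking instead of taking only the argmax lets an application fall back to the second-best command when the top confidence is below its threshold.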
## Real-Time BCI Application
```python
from analysis import InnerSpeechAnalyzer
# Initialize predictor
analyzer = InnerSpeechAnalyzer('path/to/your/model.h5')
predictor = analyzer.create_real_time_predictor()
# Real-time loop
while True:
    eeg_data = capture_eeg_signal()  # Your EEG acquisition function
    command, confidence = predictor.predict_thought(eeg_data)
    if confidence > 0.8:
        execute_command(command)  # Your command execution
        print(f"Executing: {command}")
```
## Hardware Requirements
### EEG Device
- **Channels**: 64 Channels (10-20 system)
- **Sampling Rate**: 250+ Hz
- **Impedance**: <5 kΩ
- **Bandwidth**: 0.5-100 Hz
### Recommended Devices
- OpenBCI Cyton + Daisy (16 channels; 64 recommended for this model)
- Emotiv EPOC X (14 channels; 64 recommended for this model)
- g.tec g.USBamp (professional-grade; supports 64 channels)
## Applications
- 🦽 **Assistive Technology**: Control for paralyzed patients
- 🎮 **Gaming**: Mind-controlled games and VR
- 🤖 **Robotics**: Brain-controlled robot navigation
- 💻 **Silent Computing**: Hands-free computer control
- 🧪 **Research**: Neuroscience and BCI studies
## Data Format
Your EEG data should be:
- **Shape**: (250, 64) per trial
- **Duration**: 1 second recording
- **Channels**: 64 EEG electrodes
- **Sampling**: 250 Hz
- **Classes**: ["Up", "Down", "Left", "Right"]
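A hedged sketch of shaping a raw recording into this `(250, 64)` format with SciPy (already in the dependency list). The 0.5-100 Hz bandpass matches the hardware bandwidth above; the filter order and per-channel z-scoring are common EEG practice, not necessarily this project's exact preprocessing pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trial(raw, fs=250, band=(0.5, 100.0)):
    """Bandpass-filter and z-score one EEG trial of shape (samples, channels)."""
    # 4th-order Butterworth bandpass; fs kwarg keeps frequencies in Hz
    b, a = butter(4, band, btype='band', fs=fs)
    filtered = filtfilt(b, a, raw, axis=0)  # zero-phase filter along time axis
    # Per-channel z-score normalization (epsilon avoids division by zero)
    return (filtered - filtered.mean(axis=0)) / (filtered.std(axis=0) + 1e-8)

trial = np.random.randn(250, 64)  # stand-in for one second of real EEG
x = preprocess_trial(trial)
print(x.shape)  # (250, 64)
```

The result can be passed to `model.predict(x.reshape(1, 250, 64))` as in the Basic Usage example.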
## Features
- ✅ **Ready-to-use** pre-trained model
- ✅ **Real-time prediction** for BCI applications
- ✅ **Custom training** with your own EEG data
- ✅ **Multiple architectures** (CNN-LSTM, Transformer)
- ✅ **EEG preprocessing** pipeline included
- ✅ **Cross-platform** support (Windows, macOS, Linux)
## Dependencies
```bash
tensorflow>=2.8.0,<3.0.0
scikit-learn>=1.0.0
numpy>=1.21.0
scipy>=1.7.0
pandas>=1.3.0
mne>=1.0.0
matplotlib>=3.5.0
seaborn>=0.11.0
```
## Example Use Cases
### Wheelchair Control
```python
# User thinks "Up" → wheelchair moves forward
# User thinks "Left" → wheelchair turns left
```
### Smart Home
```python
# User thinks "Up" → lights turn on
# User thinks "Down" → lights turn off
```
### Gaming
```python
# User thinks "Right" → character moves right
# Mental commands for game control
```
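All three use cases above reduce to mapping a decoded class to a device action. A minimal, hypothetical dispatch sketch for the smart-home case; the `ACTIONS` table and its return values are illustrative, not part of this repository:

```python
# Hypothetical mapping from decoded commands to smart-home actions;
# replace the lambdas with real device-control calls.
ACTIONS = {
    'Up': lambda: 'lights_on',
    'Down': lambda: 'lights_off',
    'Left': lambda: 'volume_down',
    'Right': lambda: 'volume_up',
}

def execute_command(command, confidence, threshold=0.8):
    """Run the mapped action only when the classifier is confident enough."""
    if confidence < threshold:
        return None  # ignore low-confidence predictions
    return ACTIONS[command]()

print(execute_command('Up', 0.91))    # lights_on
print(execute_command('Down', 0.42))  # None
```

The threshold mirrors the `confidence > 0.8` check in the real-time loop above; tuning it trades responsiveness against false activations.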
## Support
- **Web Site**: [Neurazum](https://neurazum.com)
- **Email**: [contact@neurazum.com](mailto:contact@neurazum.com)
## Note
**This project is in the *BETA* phase. Use at your own risk. Because the model is still under development, low accuracy rates may be observed. In addition, since the data belongs to <span style="color: #ff8d26; "><b>Neurazum</b></span>, the function structure may change in future models.**
## License
CC-BY-NC-SA 4.0 - see [LICENSE](https://creativecommons.org/licenses/by-nc-sa/4.0/) file for details.
### Acknowledgments
1. Neurazum's own dataset was used; this dataset is closed source.
2. Nieto, N., Peterson, V., Rufiner, H. L., Kamienkowski, J. E., & Spies, R. (2021).
"Thinking out loud, an open access EEG-based BCI dataset for inner speech recognition."
bioRxiv. https://doi.org/10.1101/2021.04.19.440473
---
*Enable mind-controlled technology with EEG! 🚀*
<span style="color: #ff8d26; "><b>Neurazum</b> AI Department</span>