kartoun committed (verified)
Commit 741663b · Parent: 762b7ce

Upload DBbun_EEG_Validation.ipynb

Files changed (1):
  1. DBbun_EEG_Validation.ipynb +1 -129
DBbun_EEG_Validation.ipynb CHANGED
@@ -1,133 +1,5 @@
 {
  "cells": [
-  {
-   "cell_type": "markdown",
-   "id": "e06d219a",
-   "metadata": {},
-   "source": [
-    "# DBbun EEG — Validation Notebook Overview\n",
-    "\n",
-    "*Last updated: 2025-10-06*\n",
-    "\n",
-    "This notebook validates the **DBbun EEG pretrained encoder** and your EEG dataset by doing the following:\n",
-    "\n",
-    "1. **Load EEG files** (e.g., `.npy` windows or full recordings) and apply basic normalization.\n",
-    "2. **Window the signals** to match the pretraining configuration (default: 2 s @ 250 Hz).\n",
-    "3. **Load the pretrained encoder** (if available) and run **inference to extract embeddings**.\n",
-    "4. **(Optional) Reconstruction check** using the full autoencoder to estimate reconstruction loss.\n",
-    "5. **Visual diagnostics**: plot raw vs. reconstructed windows, and simple embedding projections (PCA/UMAP).\n",
-    "6. **Batch evaluation**: compute average loss/variance across a validation split.\n",
-    "\n",
-    "> 📌 **Tip:** To use your latest pretrained model artifacts generated by the training script, place these files next to the notebook or set the path variables in the next cell:\n",
-    ">\n",
-    "> - `pretrained_out/encoder_state.pt`\n",
-    "> - `pretrained_out/encoder_traced.pt`\n",
-    "> - `pretrained_out/model_def.json`\n",
-    "\n",
-    "---\n",
-    "\n",
-    "## How to use this notebook with your pretrained model\n",
-    "\n",
-    "1. **Set paths** in the next cell:\n",
-    "   - `MODEL_DIR = \"pretrained_out\"`\n",
-    "   - `DATA_DIR = r\"d:\\dbbun-eeg\\data\\val_npy\"` (or any folder with `.npy` EEG arrays)\n",
-    "2. **Run all cells** up to the “Evaluate reconstruction / embeddings” section.\n",
-    "3. Review:\n",
-    "   - The mean reconstruction loss (`L1`/`Huber`)\n",
-    "   - Example plots of raw vs reconstructed windows\n",
-    "   - Embedding scatter (optional)\n",
-    "\n",
-    "If you don't have the **full autoencoder state** and only have the **encoder**:\n",
-    "- The reconstruction evaluation will be skipped automatically.\n",
-    "- Embedding extraction and visualization will still run.\n",
-    "\n",
-    "---\n",
-    "\n",
-    "### Notebook outline (detected headings)\n",
-    "- # DBbun EEG — Validation & Preview Notebook\n",
-    "- ## Preview Gallery"
-   ]
-  },
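Step 2 of the removed overview (windowing to 2 s @ 250 Hz) has no corresponding code anywhere in this diff. A minimal sketch of what such a helper could look like, assuming non-overlapping windows over a `(channels, samples)` array; `window_signal` and its defaults are illustrative, not part of the notebook:

```python
import numpy as np

def window_signal(arr: np.ndarray, fs: int = 250, win_sec: float = 2.0) -> np.ndarray:
    """Split a (channels, samples) EEG array into non-overlapping windows.

    Hypothetical helper: returns (n_windows, channels, win_len), with the
    2 s @ 250 Hz defaults (win_len = 500) stated in the removed overview.
    """
    win_len = int(fs * win_sec)
    n_win = arr.shape[1] // win_len          # drop the trailing partial window
    trimmed = arr[:, : n_win * win_len]
    # (channels, n_win, win_len) -> (n_win, channels, win_len)
    return trimmed.reshape(arr.shape[0], n_win, win_len).transpose(1, 0, 2)
```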
-  {
-   "cell_type": "markdown",
-   "id": "f9fa18e8",
-   "metadata": {},
-   "source": [
-    "## Quick-start configuration\n",
-    "\n",
-    "Run the next cell to set paths and load the pretrained encoder if present. Adjust the folders to your setup.\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "3a288feb",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Paths\n",
-    "MODEL_DIR = \"pretrained_out\"  # directory containing encoder_state.pt and model_def.json\n",
-    "DATA_DIR = r\"d:\\dbbun-eeg\\data\\val_npy\"  # set to your validation folder (.npy files)\n",
-    "\n",
-    "import json, pathlib, torch, torch.nn as nn\n",
-    "from pathlib import Path\n",
-    "\n",
-    "# Minimal encoder class (same architecture used during pretraining)\n",
-    "class Conv1dEncoder(nn.Module):\n",
-    "    def __init__(self, in_channels, widths=(32, 64, 128), latent_dim=128, dropout=0.1):\n",
-    "        super().__init__()\n",
-    "        layers = []\n",
-    "        prev = in_channels\n",
-    "        for w in widths:\n",
-    "            layers += [\n",
-    "                nn.Conv1d(prev, w, kernel_size=7, padding=3, stride=2),\n",
-    "                nn.BatchNorm1d(w),\n",
-    "                nn.GELU(),\n",
-    "                nn.Dropout(dropout),\n",
-    "            ]\n",
-    "            prev = w\n",
-    "        self.conv = nn.Sequential(*layers)\n",
-    "        self.pool = nn.AdaptiveAvgPool1d(1)\n",
-    "        self.proj = nn.Linear(prev, latent_dim)\n",
-    "\n",
-    "    def forward(self, x):\n",
-    "        h = self.conv(x)              # (B, W, L')\n",
-    "        g = self.pool(h).squeeze(-1)  # (B, W)\n",
-    "        z = self.proj(g)              # (B, latent)\n",
-    "        return z, h\n",
-    "\n",
-    "# Attempt to load model metadata and weights (if available)\n",
-    "md_path = Path(MODEL_DIR) / \"model_def.json\"\n",
-    "enc_path = Path(MODEL_DIR) / \"encoder_state.pt\"\n",
-    "\n",
-    "enc = None\n",
-    "md = None\n",
-    "if md_path.exists() and enc_path.exists():\n",
-    "    md = json.loads(md_path.read_text())\n",
-    "    enc = Conv1dEncoder(\n",
-    "        in_channels=md[\"channels\"],\n",
-    "        widths=tuple(md[\"encoder_channels\"]),\n",
-    "        latent_dim=md[\"latent_dim\"],\n",
-    "        dropout=md[\"dropout\"],\n",
-    "    )\n",
-    "    enc.load_state_dict(torch.load(enc_path, map_location=\"cpu\"))\n",
-    "    enc.eval()\n",
-    "    print(\"✅ Loaded pretrained encoder:\", enc_path)\n",
-    "else:\n",
-    "    print(\"⚠️ Pretrained encoder not found in\", MODEL_DIR, \"\\nExpected files: encoder_state.pt and model_def.json\")\n",
-    "\n",
-    "# Small utility to read a .npy EEG file as (channels, samples)\n",
-    "import numpy as np\n",
-    "def load_eeg_npy(path):\n",
-    "    arr = np.load(path, mmap_mode='r')\n",
-    "    if arr.ndim != 2:\n",
-    "        raise ValueError(f\"Expected 2D array, got {arr.shape} in {path}\")\n",
-    "    return arr\n",
-    "\n",
-    "print(\"MODEL_DIR =\", Path(MODEL_DIR).resolve())\n",
-    "print(\"DATA_DIR =\", Path(DATA_DIR).resolve())\n"
-   ]
-  },
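For a quick sanity check of the removed configuration cell, a usage sketch; it assumes `enc`, `DATA_DIR`, and `load_eeg_npy` from that cell plus the hypothetical `window_signal` helper above, and extracts embeddings for one file:

```python
import numpy as np
import torch
from pathlib import Path

# Assumes `enc`, `DATA_DIR`, `load_eeg_npy` (cell above) and the
# hypothetical `window_signal` helper are in scope.
files = sorted(Path(DATA_DIR).glob("*.npy"))
if enc is not None and files:
    arr = np.asarray(load_eeg_npy(files[0]))   # (channels, samples)
    wins = window_signal(arr)                  # (n_windows, channels, 500)
    x = torch.from_numpy(np.ascontiguousarray(wins)).float()
    with torch.no_grad():
        z, h = enc(x)                          # z: (n_windows, latent_dim)
    print("embeddings:", tuple(z.shape))
```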
   {
    "cell_type": "markdown",
    "id": "513db225",
@@ -994,7 +866,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "3d844e56",
+   "id": "0abd4e7b",
    "metadata": {},
    "source": [
     "# 🧩 Interpreting the Results\n",
 