{ "cells": [ { "cell_type": "markdown", "id": "20fce00f", "metadata": {}, "source": [ "### Deep Learning\n", "#### Multi-layer perceptron (MLP) for regression (from scratch)\n", "A Multilayer Perceptron (MLP) for regression is a feedforward neural network with:\n", "- Input Layer: Receives feature vectors.\n", "- Hidden Layers: Use nonlinear activations (e.g., ReLU) to learn patterns.\n", "- Output Layer: Single or multiple neurons with linear activations to predict continuous values.\n", "- Loss Function: Here, the Mean Squared Error (MSE) to minimize prediction errors.\n", "\n", "The pseudo-code of training a three-layer MLP (generally any MLP) for regression with **backpropagation** is specified as: \n", "
1. **Initialize** weights and biases $W_1$, $b_1$, $W_2$, $b_2$.\n", "
2. For each epoch:\n", " - **Shuffle** the training data.\n", " - **Split** it into mini-batches of size batch_size.\n", " - For each mini-batch:\n", " - **Forward pass:** Compute predictions $A_2$.\n", " - **Compute loss:** MSE using $Y-A_2$.\n", " - **Backward pass:** Compute gradients.\n", " - **Update weights:** With gradient descent.\n", "3. **Evaluate** on the validation set periodically.\n", "\n", "Here, $A_2$ is the output of the output layer, and $Y$ is the matrix of desired outputs such that each row $i$ is the desired vector $\\boldsymbol{y}_i$ for the input vector $\\boldsymbol{x}_i$. We assume that we are given $n$ data pairs $(\\boldsymbol{x}_i,\\boldsymbol{y}_i)$.\n", "\n", "**Hint:** For the hidden layer, we can use activation functions such as *Logistic*, *ReLU*, and *Tanh*; a minimal sketch of these follows below.\n", "
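\n", "As an illustration of the hint above, here is a minimal standalone sketch of these three activation functions and their derivatives (the helper names are our own; the class below only uses ReLU, defined there as methods):\n", "\n", "```python\n", "import numpy as np\n", "\n", "def logistic(x):\n", "    # Logistic (sigmoid): squashes x into (0, 1)\n", "    return 1.0 / (1.0 + np.exp(-x))\n", "\n", "def logistic_derivative(x):\n", "    s = logistic(x)\n", "    return s * (1.0 - s)  # sigma'(x) = sigma(x) * (1 - sigma(x))\n", "\n", "def relu(x):\n", "    # ReLU: zero for negative inputs, identity otherwise\n", "    return np.maximum(0.0, x)\n", "\n", "def relu_derivative(x):\n", "    return (x > 0).astype(float)  # subgradient taken as 0 at x = 0\n", "\n", "def tanh_derivative(x):\n", "    # np.tanh itself serves as the forward function\n", "    return 1.0 - np.tanh(x) ** 2\n", "```\n", "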
\n", "In the following, we implement a three-layer MLP for regression using a matrix form of the backpropagation with mini-batch. The whole formulae are given in the link below:\n", "
\n", "https://github.com/ostad-ai/Machine-Learning\n", "
Explanation: https://www.pinterest.com/HamedShahHosseini/Machine-Learning" ] }, { "cell_type": "code", "execution_count": 1, "id": "df3dc448", "metadata": {}, "outputs": [], "source": [ "# Import required modules\n", "import numpy as np\n", "from sklearn.datasets import make_regression\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.metrics import mean_squared_error" ] }, { "cell_type": "code", "execution_count": 2, "id": "9d06b4cc", "metadata": {}, "outputs": [], "source": [ "# The class that implements the MLP with three layers for regression\n", "class MLPRegressor:\n", " def __init__(self, input_size, hidden_size, output_size):\n", " # He initialization for ReLU\n", " self.W1 = np.random.randn(input_size, hidden_size) * np.sqrt(2. / input_size)\n", " self.b1 = np.zeros((1, hidden_size))\n", " self.W2 = np.random.randn(hidden_size, output_size) * np.sqrt(2. / hidden_size)\n", " self.b2 = np.zeros((1, output_size))\n", "\n", " # Activation function ReLU \n", " def relu(self, x):\n", " return np.maximum(0, x)\n", "\n", " # Derivative of ReLU\n", " def relu_derivative(self, x):\n", " return (x > 0).astype(float)\n", " \n", " # Forward pass\n", " def forward(self, X):\n", " self.Z1 = X @ self.W1 + self.b1\n", " self.A1 = self.relu(self.Z1)\n", " self.Z2 = self.A1 @ self.W2 + self.b2\n", " self.A2 = self.Z2 # Linear activation\n", " return self.A2\n", "\n", " # Compute Mean Squared Error (MSE)\n", " def mse_loss(self, y_pred, y_true):\n", " return np.mean((y_true - y_pred) ** 2) # y_true first!\n", " \n", " # Backward pass\n", " def backward(self, X, y, learning_rate):\n", " n = X.shape[0]\n", "\n", " # Output layer gradients (y_true - y_pred)\n", " dZ2 = -(y - self.A2) \n", " dW2 = (1 / n) * (self.A1.T @ dZ2)\n", " db2 = (1 / n) * np.sum(dZ2, axis=0, keepdims=True)\n", "\n", " # Hidden layer gradients\n", " dA1 = dZ2 @ self.W2.T\n", " dZ1 = dA1 * self.relu_derivative(self.Z1)\n", " dW1 = (1 / n) * (X.T @ dZ1)\n", " db1 = (1 / n) * np.sum(dZ1, axis=0, keepdims=True)\n", "\n", " # Update parameters\n", " self.W1 -= learning_rate * dW1\n", " self.b1 -= learning_rate * db1\n", " self.W2 -= learning_rate * dW2\n", " self.b2 -= learning_rate * db2\n", " \n", " # Train with with mini-batch and backpropagation\n", " def train(self, X, y, epochs, learning_rate, batch_size, X_val=None, y_val=None):\n", " n_samples = X.shape[0]\n", " for epoch in range(epochs):\n", " # Shuffle data\n", " indices = np.arange(n_samples)\n", " np.random.shuffle(indices)\n", " X_shuffled = X[indices]\n", " y_shuffled = y[indices]\n", "\n", " # Mini-batch training\n", " for i in range(0, n_samples, batch_size):\n", " X_batch = X_shuffled[i:i + batch_size]\n", " y_batch = y_shuffled[i:i + batch_size]\n", " self.forward(X_batch)\n", " self.backward(X_batch, y_batch, learning_rate)\n", "\n", " # Log progress\n", " if epoch % 10 == 0:\n", " train_pred = self.forward(X)\n", " train_loss = self.mse_loss(train_pred, y)\n", " if X_val is not None:\n", " val_pred = self.forward(X_val)\n", " val_loss = self.mse_loss(val_pred, y_val)\n", " print(f\"Epoch {epoch}, Train MSE: {train_loss:.4f}, Val MSE: {val_loss:.4f}\")\n", " else:\n", " print(f\"Epoch {epoch}, Train MSE: {train_loss:.4f}\")\n", " \n", " # Compute output for the input matrix X \n", " def predict(self, X):\n", " return self.forward(X)" ] }, { "cell_type": "code", "execution_count": 3, "id": "7cb5407a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 0, Train MSE: 871.5028, Val MSE: 908.2278\n", 
"Epoch 10, Train MSE: 7.1552, Val MSE: 9.6900\n", "Epoch 20, Train MSE: 9.9743, Val MSE: 13.0336\n", "Epoch 30, Train MSE: 1.1262, Val MSE: 2.1430\n", "Epoch 40, Train MSE: 0.3651, Val MSE: 1.3042\n", "Epoch 50, Train MSE: 0.3173, Val MSE: 1.1293\n", "Epoch 60, Train MSE: 0.3466, Val MSE: 1.0649\n", "Epoch 70, Train MSE: 0.6222, Val MSE: 1.3919\n", "Epoch 80, Train MSE: 0.9323, Val MSE: 1.5319\n", "Epoch 90, Train MSE: 0.2779, Val MSE: 1.0138\n", "\n", "Final Validation MSE: 0.8744\n" ] } ], "source": [ "# Example usage\n", "if __name__ == \"__main__\":\n", " X, y = make_regression(n_samples=1000, n_features=10, n_targets=1, noise=0.1, random_state=42)\n", " y = y.reshape(-1, 1) # Ensure y is 2D\n", " X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)\n", "\n", " mlp = MLPRegressor(input_size=10, hidden_size=64, output_size=1)\n", " mlp.train(X_train, y_train, epochs=100, learning_rate=0.01, batch_size=32, X_val=X_val, y_val=y_val)\n", "\n", " y_pred = mlp.predict(X_val)\n", " final_mse = mean_squared_error(y_val, y_pred) # sklearn uses (y_true, y_pred)\n", " print(f\"\\nFinal Validation MSE: {final_mse:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "1867d7bd", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.1" } }, "nbformat": 4, "nbformat_minor": 5 }