{ "cells": [ { "cell_type": "markdown", "id": "dedefa38", "metadata": {}, "source": [ "## Machine Learning\n", "### Subgradient Method for Lasso Regression and Elastic Net\n", "In **subgradient method**, we move in the negative of subgradient of the loss function in order to find the parameters. So, if the loss function is $L(\\boldsymbol{w})$, then we update the parameter vector $\\boldsymbol{w}$ by the subgradient of $L(\\boldsymbol{w})$, denoted by $\\partial L(\\boldsymbol{w})$:\n", "
$\\boldsymbol{w}\\leftarrow \\boldsymbol{w}-\\eta_k\\partial L(\\boldsymbol{w})$\n", "
where $\\eta_k>0$ is the **learning rate** (also called *step size*) at time step $k$.\n", "
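\n", "For example, one update is a single vector operation; here is a minimal sketch, assuming a subgradient `g` of the loss at the current `w` is already available (all names here are hypothetical):\n", "```python\n", "import numpy as np\n", "w = np.zeros(3)                 # current parameter vector\n", "g = np.array([0.5, -1.0, 0.0])  # a subgradient of L at w (assumed given)\n", "eta_k = 0.01                    # learning rate at step k\n", "w = w - eta_k * g               # one subgradient update\n", "```\n", "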
In **Elastic Net**, we use the following loss function:\n", "
$L_{EN}(\\boldsymbol{w})=\\frac{1}{2}||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\lambda_1 ||\\boldsymbol{w}||_1+\\frac{\\lambda_2}{2} ||\\boldsymbol{w}||^2$\n", "
**Hint:** If we set $\\lambda_2$ to zero, we recover the **Lasso**:\n", "
$L_{Lasso}(\\boldsymbol{w})=\\frac{1}{2}||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\lambda ||\\boldsymbol{w}||_1$\n", "
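\n", "To make the relation concrete, here is a small helper that evaluates $L_{EN}(\\boldsymbol{w})$; it is only an illustrative sketch (the names `loss_en`, `lam1`, and `lam2` are hypothetical), and calling it with `lam2=0` gives exactly the Lasso loss:\n", "```python\n", "import numpy as np\n", "\n", "def loss_en(X, y, w, lam1, lam2):\n", "    r = y - X @ w  # residual vector\n", "    return 0.5 * r @ r + lam1 * np.abs(w).sum() + 0.5 * lam2 * w @ w\n", "```\n", "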
Now, we compute $\\partial L_{EN}(\\boldsymbol{w})$ as:\n", "
$\\partial L_{EN}(\\boldsymbol{w})=-X^T(\\boldsymbol{y}-X\\boldsymbol{w})+\\lambda_1 \\partial ||\\boldsymbol{w}||_1+\\lambda_2 \\boldsymbol{w}$\n", "
where\n", "
$\\partial ||\\boldsymbol{w}||_1=[\\partial |w_0|,\\partial |w_1|,...,\\partial |w_{q-1}| ]^T$\n", "
where $\\partial |w_i|=sign(w_i)$ if $w_i\\ne0$; otherwise, it can be any value in $[-1,1]$. \n", "
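\n", "In code, a convenient valid choice at $w_i=0$ is zero itself, which is what NumPy's `sign` returns; a minimal sketch:\n", "```python\n", "import numpy as np\n", "w = np.array([1.5, 0.0, -2.0])\n", "sub_l1 = np.sign(w)  # array([ 1.,  0., -1.]); 0 lies in [-1, 1], so it is a valid subgradient\n", "```\n", "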

**Reminder:** We have data points $(\\boldsymbol{x}_i,y_i)$ where the first component of each $\\boldsymbol{x}_i$ is one. Thus, the rows of matrix $X$ are the $\\boldsymbol{x}^T_i$, so the first column of $X$ is all ones. Vectors are denoted here by bold symbols, and they are all column vectors.\n", "
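\n", "For illustration, the design matrix $X$ can be built from a raw feature matrix by prepending a column of ones; a minimal sketch with toy data (all names hypothetical):\n", "```python\n", "import numpy as np\n", "Xs_toy = np.random.rand(5, 3)                 # 5 data points, 3 raw features\n", "X_toy = np.hstack([np.ones((5, 1)), Xs_toy])  # first column of X is all ones\n", "```\n", "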

In the following, we load the file *diabetes.csv*, which is our dataset, consisting of 768 rows and 9 columns. Its last column holds the values of $y_i$, while the remaining columns hold the values of $\\boldsymbol{x}^T_i$. In fact, each row of the dataset is a data point $(\\boldsymbol{x}^T_i,y_i)$.\n", " - First, we load the dataset and then normalize each column of its input data (excluding the last column).\n", " - Next, the subgradient method is used for Elastic Net to estimate the parameters.\n", " - For a deeper discussion of the subgradient method, see our post in Repository **Optimization**.\n", " - Finally, we measure the accuracy of the model for *binary classification*.\n", " \n", "**Hint:** There are methods better suited to *Elastic Net* and *Lasso*, such as **Coordinate Descent**, which we will discuss in the future; a small preview sketch follows below. \n", "
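\n", "Here is a minimal sketch of the soft-thresholding operator at the heart of coordinate descent for the Lasso, shown only as a preview and not used in this notebook:\n", "```python\n", "import numpy as np\n", "\n", "def soft_threshold(z, lam):\n", "    # minimizer of 0.5*(w - z)**2 + lam*|w|; shrinks z toward zero by lam\n", "    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)\n", "```\n", "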
\n", "The Python code at: https://github.com/ostad-ai/Machine-Learning\n", "
Explanation: https://www.pinterest.com/HamedShahHosseini/Machine-Learning/" ] }, { "cell_type": "code", "execution_count": 1, "id": "85af9aac", "metadata": {}, "outputs": [], "source": [ "# importing required modules\n", "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "code", "execution_count": 2, "id": "dddddcd7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The dataset has 768 rows and 9 columns\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
PregnanciesGlucoseBloodPressureSkinThicknessInsulinBMIDiabetesPedigreeFunctionAgeOutcome
061487235033.60.627501
11856629026.60.351310
28183640023.30.672321
318966239428.10.167210
40137403516843.12.288331
\n", "
" ], "text/plain": [ " Pregnancies Glucose BloodPressure SkinThickness Insulin BMI \\\n", "0 6 148 72 35 0 33.6 \n", "1 1 85 66 29 0 26.6 \n", "2 8 183 64 0 0 23.3 \n", "3 1 89 66 23 94 28.1 \n", "4 0 137 40 35 168 43.1 \n", "\n", " DiabetesPedigreeFunction Age Outcome \n", "0 0.627 50 1 \n", "1 0.351 31 0 \n", "2 0.672 32 1 \n", "3 0.167 21 0 \n", "4 2.288 33 1 " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# loading the csv dataset\n", "filepath='./diabetes.csv'\n", "df=pd.read_csv(filepath)\n", "rows,cols=df.shape\n", "print(f'The dataset has {rows} rows and {cols} columns')\n", "df.head() # showing top five rows" ] }, { "cell_type": "code", "execution_count": 3, "id": "bf32a4ff", "metadata": {}, "outputs": [], "source": [ "# separating pairs X_i and y_i into matrix Xs and vector ys\n", "# and then normalizing each data column separately, except the last column\n", "ys=df['Outcome'].values\n", "df_xs=df.drop(['Outcome'],axis=1)\n", "df_xs=(df_xs-df_xs.mean())/df_xs.std()\n", "Xs=df_xs.to_numpy() #converting to a numpy array" ] }, { "cell_type": "code", "execution_count": 4, "id": "b9960049", "metadata": {}, "outputs": [], "source": [ "# Subgradient method for Elastic Net (also for Lasso)\n", "# Xs is a matrix with n rows and q-1 columns\n", "# ys is a column vector of size n holding the dependent values yi\n", "# etta0 is the learning rate constant\n", "def subgradient_EN(Xs,ys,iter=1000,etta0=.01,landa1=5,landa2=1):\n", " X=np.ones((Xs.shape[0],Xs.shape[1]+1))\n", " X[:,1:]=Xs.copy()\n", " w=.1*np.random.rand(X.shape[1]).reshape(-1,1)\n", " for k in range(iter):\n", " gk=-X.T@(ys.reshape(-1,1)-X@w)\n", " gk+=landa1*np.sign(w)+landa2*w\n", " etta=etta0/np.sqrt(np.sum(gk**2))\n", " w-=etta*gk\n", " return w.flatten()" ] }, { "cell_type": "markdown", "id": "fb2f43fb", "metadata": {}, "source": [ "In the following cell, we use the subgradient method with its default $\\lambda_1$ and $\\lambda_2$.\n", "However, you can change their values and observe the difference in parameters and/or accuracies." 
] }, { "cell_type": "code", "execution_count": 5, "id": "0c3a7617", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The estimated parameters:\n", "[ 0.3420026 0.06483115 0.18184957 -0.03265791 0.00229541 -0.00719638\n", " 0.09793499 0.04316883 0.02638846]\n" ] } ], "source": [ "# example\n", "# estimated parameters for diabetes \n", "# because of L1 norm, \n", "# we see some (usually unneccesary) parameters are near to zero\n", "w_hat=subgradient_EN(Xs,ys)\n", "print(f'The estimated parameters:\\n{w_hat}')" ] }, { "cell_type": "code", "execution_count": 6, "id": "dca3f94b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The accuracy of the model for classification: 0.7721354166666666\n" ] } ], "source": [ "# measuring accuracy of the model for classification\n", "# we use value of .5 to thereshold output to zero or one\n", "X=np.ones((Xs.shape[0],Xs.shape[1]+1))\n", "X[:,1:]=Xs.copy()\n", "ys_hat=np.int16(X@w_hat.reshape(-1,1)>.5).flatten() # estimated ys\n", "accuracy=np.sum(ys_hat==ys)/len(ys)\n", "print(f'The accuracy of the model for classification: {accuracy}')" ] }, { "cell_type": "code", "execution_count": null, "id": "8b6d8a05", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.1" } }, "nbformat": 4, "nbformat_minor": 5 }