{
"cells": [
{
"cell_type": "markdown",
"id": "dedefa38",
"metadata": {},
"source": [
"## Machine Learning\n",
"### Subgradient Method for Lasso Regression and Elastic Net\n",
"In **subgradient method**, we move in the negative of subgradient of the loss function in order to find the parameters. So, if the loss function is $L(\\boldsymbol{w})$, then we update the parameter vector $\\boldsymbol{w}$ by the subgradient of $L(\\boldsymbol{w})$, denoted by $\\partial L(\\boldsymbol{w})$:\n",
"
$\\boldsymbol{w}\\leftarrow \\boldsymbol{w}-\\eta_k\\partial L(\\boldsymbol{w})$\n",
"
where $\\eta_k>0$ is the **learning rate** (also called *step size*) at time step $k$.\n",
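"\n",
"As a minimal sketch of this update (assuming NumPy and a diminishing step size $\\eta_k=\\eta_0/\\sqrt{k+1}$, which is just one common choice; the function name `subgradient_step` is ours for illustration):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def subgradient_step(w, g, eta0, k):\n",
"    # One update: w <- w - eta_k * g, with the assumed diminishing\n",
"    # schedule eta_k = eta0 / sqrt(k + 1).\n",
"    eta_k = eta0 / np.sqrt(k + 1)\n",
"    return w - eta_k * g\n",
"```\n",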
"
In **Elastic Net**, we use the following loss function:\n",
"
$L_{EN}(\\boldsymbol{w})=\\frac{1}{2}||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\lambda_1 ||\\boldsymbol{w}||_1+\\frac{\\lambda_2}{2} ||\\boldsymbol{w}||^2$\n",
"
**Hint:** If we set $\\lambda_2$ to zero, we get to the **Lasso**:\n",
"
$L_{Lasso}(\\boldsymbol{w})=\\frac{1}{2}||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\lambda ||\\boldsymbol{w}||_1$\n",
"
Now, we compute $\\partial L_{EN}(\\boldsymbol{w})$ by:\n",
"
$\\partial L_{EN}(\\boldsymbol{w})=-X^T(\\boldsymbol{y}-X\\boldsymbol{w})+\\lambda_1 \\partial ||\\boldsymbol{w}||_1+\\lambda_2 \\boldsymbol{w}$\n",
"
such that\n",
"
$\\partial ||\\boldsymbol{w}||_1=[\\partial |w_0|,\\partial |w_1|,...,\\partial |w_{q-1}| ]^T$\n",
"
where $\\partial |w_i|=sign(w_i)$ if $w_i\\ne0$; otherwise $[-1,1]$ \n",
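"\n",
"A minimal sketch of this subgradient computation (assuming NumPy; the helper names `elastic_net_subgradient` and `elastic_net_loss` are ours for illustration). Note that `np.sign` returns $0$ at $w_i=0$, which lies in $[-1,1]$:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def elastic_net_subgradient(w, X, y, lambda1, lambda2):\n",
"    # Subgradient of L_EN(w) = 0.5*||y - Xw||^2 + lambda1*||w||_1 + 0.5*lambda2*||w||^2.\n",
"    # np.sign(w) picks the subgradient 0 at coordinates where w_i = 0.\n",
"    residual = y - X @ w\n",
"    return -X.T @ residual + lambda1 * np.sign(w) + lambda2 * w\n",
"\n",
"def elastic_net_loss(w, X, y, lambda1, lambda2):\n",
"    # The Elastic Net loss itself, useful for monitoring convergence.\n",
"    residual = y - X @ w\n",
"    return 0.5 * residual @ residual + lambda1 * np.abs(w).sum() + 0.5 * lambda2 * w @ w\n",
"```\n",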
"
**Reminder:** We have data points $(\\boldsymbol{x}_i,y_i)$ where the first components of $\\boldsymbol{x}_i$ are one. Thus, the rows of matrix $X$ are composed of $\\boldsymbol{x}^T_i$ such that the first column of $X$ is all one. Vectors are denoted here by bold symbols, and they are all column vectors.\n",
"
In the following, we download the file *diabetes.csv*, which is our dataset, composing of 768 rows and 9 columns. Its last column holds the values of $y_i$, while the rest of columns holds the values of $\\boldsymbol{x}^T_i$. in fact, each row of the dataset is a data point $(\\boldsymbol{x}^T_i,y_i)$ \n",
" - First we load the dataset, and then normalize each column of its input data (excluding the last column).\n",
" - Next, the subgradient method is used for Elastic Net to estimate the parameters.\n",
" - For deeper discussion on subgradient method, see our post in Repository **Optimization**.\n",
" - Finally, we measure the accuracy of the model for *binary classification*.\n",
" \n",
"**Hint:** There are better subgradient-based methods for *Elastic Net* and *Lasso* such as **Coordinate Descent** that we will discuss in the future. \n",
"
\n", " | Pregnancies | \n", "Glucose | \n", "BloodPressure | \n", "SkinThickness | \n", "Insulin | \n", "BMI | \n", "DiabetesPedigreeFunction | \n", "Age | \n", "Outcome | \n", "
---|---|---|---|---|---|---|---|---|---|
0 | \n", "6 | \n", "148 | \n", "72 | \n", "35 | \n", "0 | \n", "33.6 | \n", "0.627 | \n", "50 | \n", "1 | \n", "
1 | \n", "1 | \n", "85 | \n", "66 | \n", "29 | \n", "0 | \n", "26.6 | \n", "0.351 | \n", "31 | \n", "0 | \n", "
2 | \n", "8 | \n", "183 | \n", "64 | \n", "0 | \n", "0 | \n", "23.3 | \n", "0.672 | \n", "32 | \n", "1 | \n", "
3 | \n", "1 | \n", "89 | \n", "66 | \n", "23 | \n", "94 | \n", "28.1 | \n", "0.167 | \n", "21 | \n", "0 | \n", "
4 | \n", "0 | \n", "137 | \n", "40 | \n", "35 | \n", "168 | \n", "43.1 | \n", "2.288 | \n", "33 | \n", "1 | \n", "