{
"cells": [
{
"cell_type": "markdown",
"id": "dedefa38",
"metadata": {},
"source": [
"## Machine Learning\n",
"### Gradient Descent for Linear and Ridge Regression\n",
"In **Gradient Descent**, we move in the negative of gradient of the loss function in order to find the parameters that make the loss minimum. So, if the loss function is $L(\\boldsymbol{w})$, then we update the parameter vector $\\boldsymbol{w}$ by:\n",
"
"$\\boldsymbol{w}\\leftarrow \\boldsymbol{w}-\\eta_k\\nabla L(\\boldsymbol{w})$\n",
"\n",
"where $\\eta_k>0$ is the **learning rate** (also called *step size*) at time step $k$.\n",
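"\n",
"As a minimal sketch of this update rule (the names `grad_L`, `eta`, and `n_steps` below are illustrative assumptions, not fixed by the text), a plain iteration could look like:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def gradient_descent(w0, grad_L, eta=0.01, n_steps=1000):\n",
"    # Repeatedly move in the direction of the negative gradient of the loss.\n",
"    w = np.asarray(w0, dtype=float)\n",
"    for _ in range(n_steps):\n",
"        w = w - eta * grad_L(w)\n",
"    return w\n",
"\n",
"# Example: minimizing L(w) = ||w||^2, whose gradient is 2w; the iterates shrink towards 0.\n",
"w_min = gradient_descent(np.ones(3), lambda w: 2 * w, eta=0.1, n_steps=100)\n",
"```\n",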
"
In **Ridge regression**, we saw that we use the following loss function in which $\\frac{1}{2}$ is applied to make the fomulas simpler:\n",
"
$L_{Ridge}(\\boldsymbol{w})=\\frac{1}{2}||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\frac{\\lambda}{2} ||\\boldsymbol{w}||^2$\n",
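"\n",
"For concreteness, this loss can be written directly in NumPy (a sketch assuming `X`, `y`, and `w` are NumPy arrays; the name `ridge_loss` is illustrative):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def ridge_loss(w, X, y, lam):\n",
"    # 0.5 * ||y - X w||^2 + 0.5 * lam * ||w||^2\n",
"    r = y - X @ w\n",
"    return 0.5 * (r @ r) + 0.5 * lam * (w @ w)\n",
"```\n",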
"
If we set $\\lambda$ to zero, we get to the **linear regression**. Now, we compute $\\nabla L(\\boldsymbol{w})$ by:\n",
"
$\\nabla L(\\boldsymbol{w})=-X^T(\\boldsymbol{y}-X\\boldsymbol{w})+\\lambda \\boldsymbol{w}$\n",
"
**Reminder:** We have data points $(\\boldsymbol{x}_i,y_i)$ where the first components of $\\boldsymbol{x}_i$ are one. Thus, the rows of matrix $X$ are composed of $\\boldsymbol{x}^T_i$ such that the first column of $X$ is all one. Vectors denoted by bold symbols here are all column vectors.\n",
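"\n",
"Putting the pieces together, a compact sketch of gradient descent on this loss might look as follows (the actual implementations appear in the code cells below; the names `ridge_gradient`, `fit_ridge_gd`, and the synthetic data are illustrative assumptions, and setting `lam = 0` gives the linear-regression case):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def ridge_gradient(w, X, y, lam):\n",
"    # Gradient of the ridge loss: -X^T (y - X w) + lam * w\n",
"    return -X.T @ (y - X @ w) + lam * w\n",
"\n",
"def fit_ridge_gd(X, y, lam=0.0, eta=0.01, n_steps=2000):\n",
"    # Plain gradient descent; lam = 0 corresponds to linear regression.\n",
"    w = np.zeros(X.shape[1])\n",
"    for _ in range(n_steps):\n",
"        w = w - eta * ridge_gradient(w, X, y, lam)\n",
"    return w\n",
"\n",
"# Tiny synthetic example: noisy points on a line, with a ones column for the intercept.\n",
"rng = np.random.default_rng(0)\n",
"x = np.linspace(0.0, 1.0, 50)\n",
"y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(50)\n",
"X = np.column_stack([np.ones_like(x), x])\n",
"w_hat = fit_ridge_gd(X, y, lam=0.0, eta=0.01, n_steps=2000)  # roughly [2, 3]\n",
"```\n",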
"
In the following, \n",
" - Gradient Descent (GD) for linear regression is implemented and tested for noisy data points of a line. \n",
" - Then, Gradient Descent is implemented for ridge regression over noisy data points of a quadratic curve.\n",
"\n",
"