{
"cells": [
{
"cell_type": "markdown",
"id": "dedefa38",
"metadata": {},
"source": [
"## Machine Learning\n",
"### Ridge regression with least squares\n",
"In **Ridge Regression** similar to the *linear regression*, we use a linear model for the the data points:\n",
"
$y=w_0+w_1x_1+w_2x_2+....w_{p-1}x_{p-1}$\n",
"
Having data points $(\\boldsymbol{x}_i,y_i)$ we want to find the best estimate for parameter vector $\\boldsymbol{w}$ using the **least squares method** augmented with a penalty term called **regularization term** as shown below:
\n",
"$L_{Ridge}(\\boldsymbol{w})=||\\boldsymbol{y}-X\\boldsymbol{w}||^2+\\lambda ||\\boldsymbol{w}||^2$\n",
"
Minimizing the loss function $L_{Ridge}(\\boldsymbol{w})$ leads to:
\n",
"$\\boldsymbol{w}=(X^TX+\\lambda I)^{-1}X^T\\boldsymbol{y}$
\n",
"where $I$ is the identity matrix, and $\\lambda\\ge0$ is the regularization (ridge) parameter.\n",
"
**Reminder:** The rows of matrix $X$ are composed of $\\boldsymbol{x}_i$ such the the first column is all one.\n",
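"\n",
"As a minimal NumPy sketch of these formulas (the function names `ridge_loss` and `ridge_fit`, the toy data, and the chosen $\\lambda$ are illustrative assumptions, not part of this notebook):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def ridge_loss(X, y, w, lam):\n",
"    # L_Ridge(w) = ||y - Xw||^2 + lambda * ||w||^2\n",
"    r = y - X @ w\n",
"    return r @ r + lam * (w @ w)\n",
"\n",
"def ridge_fit(X_raw, y, lam=1.0):\n",
"    # Prepend a column of ones so that w[0] plays the role of the intercept w_0\n",
"    X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])\n",
"    # Closed-form ridge solution w = (X^T X + lambda I)^{-1} X^T y,\n",
"    # computed as a linear solve rather than an explicit inverse\n",
"    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)\n",
"\n",
"# Toy usage with hypothetical data: y = 2 + 3x plus a little noise\n",
"rng = np.random.default_rng(0)\n",
"X_raw = rng.uniform(0.0, 1.0, size=(50, 1))\n",
"y = 2.0 + 3.0 * X_raw[:, 0] + rng.normal(0.0, 0.1, size=50)\n",
"print(ridge_fit(X_raw, y, lam=0.1))  # roughly [2, 3] for small lambda\n",
"```\n",
"\n",
"Note that this formula, like the loss above, penalizes every component of $\\boldsymbol{w}$; in practice the intercept $w_0$ is often left unpenalized.\n",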
"