{
"cells": [
{
"cell_type": "markdown",
"id": "dba0131e",
"metadata": {},
"source": [
"## Machine Learning\n",
"### RBF network (with stochastic gradient descent)\n",
"The RBF network $F$ with a bias term $w_{K+1}$ is expressed by:\n",
"\n",
"$\\large F(\\boldsymbol{x})=\\sum_{k=1}^K w_k \\phi_k(||\\boldsymbol{x}-\\boldsymbol{c}_k||)+w_{K+1}$ (1)\n",
"\n",
"where $\\boldsymbol{c}_k$ are $K$ distinct center points, and $w_{K+1}$ is the bias term, which is considered part of the weight vector $\\boldsymbol{w}=[w_1,w_2,...,w_K,w_{K+1}]^T$. Moreover, we usually choose the Gaussian basis $\\phi_k(r)=\\exp(-\\frac{r^2}{2\\sigma_k^2})$.\n",
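"\n",
"As a quick illustration, Eq. (1) can be evaluated directly with NumPy (the centers, width, and weights below are made-up toy values, not from the text):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Hypothetical toy setup: K = 3 centers, shared width, arbitrary weights\n",
"c = np.array([[-1.0], [0.0], [1.0]])   # centers c_k\n",
"sigma = 0.5                            # width sigma_k (shared here)\n",
"w = np.array([0.2, -0.4, 0.7, 0.1])    # w_1..w_K and bias w_{K+1}\n",
"\n",
"def F(x):\n",
"    r = np.linalg.norm(x - c, axis=1)            # ||x - c_k||\n",
"    phi = np.exp(-r ** 2 / (2 * sigma ** 2))     # Gaussian phi_k\n",
"    return w[:-1] @ phi + w[-1]                  # Eq. (1): weighted sum + bias\n",
"\n",
"print(F(np.array([0.5])))\n",
"```\n",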
"\n",
"The loss function for a single training sample $(\\boldsymbol{x}_i,y_i)$, with an $\\ell_2$ penalty of strength $\\lambda$ on $w_1,...,w_K$, is defined by:\n",
"\n",
"$\\large L_i=\\frac{1}{2}(y_i-F(\\boldsymbol{x}_i))^2+\\frac{1}{2}\\lambda \\sum_{k=1}^K w_k^2$\n",
"\n",
"Then, the gradient of $L_i$ with respect to the weight $w_k$ is:\n",
"\n",
"$\\large \\frac{\\partial L_i}{\\partial w_k}=-(y_i-F(\\boldsymbol{x}_i))\\phi_k(||\\boldsymbol{x}_i-\\boldsymbol{c}_k||)+\\lambda w_k$ for $k=1,2,...,K$\n",
"\n",
"and for the bias term $w_{K+1}$ (which is not regularized):\n",
"\n",
"$\\large \\frac{\\partial L_i}{\\partial w_{K+1}}=-(y_i-F(\\boldsymbol{x}_i))$\n",
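"\n",
"(Chain-rule check, added for clarity: since $\\frac{\\partial F}{\\partial w_k}=\\phi_k(||\\boldsymbol{x}_i-\\boldsymbol{c}_k||)$ for $k=1,...,K$ and $\\frac{\\partial F}{\\partial w_{K+1}}=1$, differentiating $\\frac{1}{2}(y_i-F(\\boldsymbol{x}_i))^2$ contributes $-(y_i-F(\\boldsymbol{x}_i))\\frac{\\partial F}{\\partial w_k}$, and the regularizer adds $\\lambda w_k$ for $k\\le K$ only.)\n",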
"\n",
"Then, according to stochastic gradient descent (SGD), we update the weights with learning rate $\\eta$ by:\n",
"\n",
"$\\large w_k\\leftarrow w_k-\\eta \\frac{\\partial L_i}{\\partial w_k}$ for $k=1,2,...,K,K+1$\n",
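"\n",
"A minimal end-to-end sketch of this SGD loop in NumPy (the $\\sin$ target, fixed centers, shared width, and learning rate are illustrative assumptions, not from the text):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X = rng.uniform(-3, 3, size=(200, 1))    # inputs x_i (toy problem)\n",
"y = np.sin(X[:, 0])                      # targets y_i\n",
"\n",
"K, sigma, lam, eta = 10, 1.0, 1e-4, 0.1  # hyperparameters (assumed values)\n",
"c = np.linspace(-3, 3, K).reshape(K, 1)  # fixed centers c_k\n",
"w = np.zeros(K + 1)                      # w_1..w_K and bias w_{K+1}\n",
"\n",
"def phi(x):\n",
"    r2 = np.sum((x - c) ** 2, axis=1)    # ||x - c_k||^2\n",
"    return np.append(np.exp(-r2 / (2 * sigma ** 2)), 1.0)  # last entry multiplies the bias\n",
"\n",
"for epoch in range(50):\n",
"    for i in rng.permutation(len(X)):\n",
"        p = phi(X[i])\n",
"        err = y[i] - w @ p               # y_i - F(x_i)\n",
"        grad = -err * p                  # -(y_i - F(x_i)) * phi_k (and * 1 for the bias)\n",
"        grad[:K] += lam * w[:K]          # L2 term applies to w_1..w_K only\n",
"        w -= eta * grad                  # SGD update\n",
"```\n",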
"