{
"cells": [
{
"cell_type": "markdown",
"id": "20fce00f",
"metadata": {},
"source": [
"### Machine Learning\n",
"#### Maximum Likelihood Estimation\n",
"Given $n$ samples received from the environment, with known joint probability function denoted by $f_n(\\boldsymbol{x};\\boldsymbol{w})$, we can define the **likelihood function** as shown below:\n",
"\n",
"$L(\\boldsymbol{x};\\boldsymbol{w})=f_n(\\boldsymbol{x};\\boldsymbol{w})$ (1)\n",
"\n",
where $\\boldsymbol{w}$ is generally a vector of unknown parameters of the joint probability function $f_n(.;.)$.\n",
"\n",
Assuming the samples are *independent and identically distributed* (i.i.d.) random variables, we can simplify Eq. (1) to this:\n",
"\n",
$L(\\boldsymbol{x};\\boldsymbol{w})=\\prod_{i=1}^{n}f(x_i;\\boldsymbol{w})$ (2)\n",
"\n",
"where $f(x_i;\\boldsymbol{w})$ is the probability function of $x_i$ with parameter vector $\\boldsymbol{w}$.\n",
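"\n",
"As a quick numerical check of Eq. (2) (a sketch assuming NumPy and SciPy are available; the normal density and sample values are illustrative), the likelihood of i.i.d. samples is the product of the per-sample densities, and the true parameters yield a larger likelihood than wrong ones:\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.stats import norm\n",
"\n",
"# Illustrative i.i.d. samples from a standard normal distribution\n",
"rng = np.random.default_rng(1)\n",
"x = rng.normal(loc=0.0, scale=1.0, size=50)\n",
"\n",
"# Eq. (2): L(x; w) = prod_i f(x_i; w), here with w = (mu, sigma)\n",
"L_true = np.prod(norm.pdf(x, loc=0.0, scale=1.0))\n",
"L_wrong = np.prod(norm.pdf(x, loc=3.0, scale=1.0))\n",
"print(L_true > L_wrong)  # the true parameters give the larger likelihood\n",
"```\n",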
"\n",
"**Maximum likelihood estimation** (MLE) finds the optimal $\\boldsymbol{w}$ from:\n",
"\n",
"$\\hat{\\boldsymbol{w}}=\\arg\\max_{\\boldsymbol{w}} L(\\boldsymbol{x};\\boldsymbol{w})$ (3)\n",
"\n",
"If the likelihood function is *differentiable*, necessary first-order conditions for the **MLE** are:\n",
"\n",
"$\\frac{\\partial L}{\\partial w_j}=0,\\quad j=0,1,\\ldots,q-1$ (4)\n",
"\n",
"where $q$ is the number of parameters in $\\boldsymbol{w}$.\n",
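"\n",
"In practice one usually maximizes the *log-likelihood* $\\ln L(\\boldsymbol{x};\\boldsymbol{w})$, which turns the product in Eq. (2) into a sum and is maximized at the same $\\boldsymbol{w}$. A minimal sketch (assuming NumPy and SciPy; the sample data and starting point are illustrative) that solves Eq. (3) for a normal density by numerically minimizing the negative log-likelihood:\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.optimize import minimize\n",
"\n",
"# Illustrative i.i.d. normal samples (true mu=2.0, sigma=1.5)\n",
"rng = np.random.default_rng(0)\n",
"x = rng.normal(loc=2.0, scale=1.5, size=1000)\n",
"\n",
"def neg_log_likelihood(w, x):\n",
"    mu, sigma = w\n",
"    # negative log of Eq. (2) for the normal density\n",
"    return 0.5 * len(x) * np.log(2 * np.pi * sigma**2) + np.sum((x - mu)**2) / (2 * sigma**2)\n",
"\n",
"# Eq. (3): w_hat = argmax L  <=>  argmin of the negative log-likelihood;\n",
"# the bound keeps sigma positive during the search\n",
"res = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(x,),\n",
"               method='L-BFGS-B', bounds=[(None, None), (1e-6, None)])\n",
"print(res.x)  # close to the sample mean and the (biased) sample std\n",
"```\n",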
"\n",
**Contents:**\n",
"- Estimation by the MLE for samples from a **continuous uniform distribution**.\n",
"- Estimation by the MLE for samples from a **normal distribution**.\n",
"