We assign a cost function $C(\theta', \theta)$ of estimating the true value of $\theta$ as $\theta'$. We then associate with an estimator $\hat{\theta}$ a conditional risk or cost averaged over all realizations of data $x$ for each value of the parameter $\theta$:
$$
R_\theta(\hat{\theta}) = \mathrm{E}_\theta\big[C(\hat{\theta}, \theta)\big] = \int_X C\big(\hat{\theta}(x), \theta\big)\, p(x, \theta)\, dx,
$$
where $X$ is the set of observations and $p(x, \theta)$ is the probability density of the data $x$ when the value of the parameter is $\theta$. We further assume that there is a certain a priori probability distribution $\pi(\theta)$ of the parameter $\theta$. We then define the Bayes estimator as the estimator that minimizes the average risk defined as
$$
r(\hat{\theta}) = \mathrm{E}\big[R_\theta(\hat{\theta})\big] = \int_X \int_\Theta C\big(\hat{\theta}(x), \theta\big)\, p(x, \theta)\, \pi(\theta)\, d\theta\, dx,
$$
where $\mathrm{E}$ is the expectation value with respect to the a priori distribution $\pi$, and $\Theta$ is the set of values of the parameter $\theta$. It is not difficult to show that for the commonly used quadratic cost function
$$
C(\theta', \theta) = (\theta' - \theta)^2,
$$
the Bayes estimator is the conditional mean of the parameter $\theta$ given the data $x$, i.e.,
$$
\hat{\theta}(x) = \mathrm{E}[\theta \mid x] = \int_\Theta \theta\, p(\theta \mid x)\, d\theta,
$$
where $p(\theta \mid x)$ is the conditional probability density of the parameter $\theta$ given the data $x$.
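To make the quadratic-cost case concrete, here is a minimal numerical sketch (not part of the original text): for an assumed toy model of Gaussian data with a Gaussian prior, it evaluates the posterior $p(\theta \mid x)$ on a grid and checks that the resulting posterior mean, i.e., the Bayes estimate, agrees with the closed-form conjugate result. All names and the model setup are illustrative assumptions.

```python
import numpy as np

# Toy model (assumed, not from the article): data x_i ~ N(theta, sigma^2)
# with a Gaussian prior pi(theta) = N(0, tau^2). Under the quadratic cost
# C(theta', theta) = (theta' - theta)^2 the Bayes estimator is the
# posterior mean E[theta | x].

rng = np.random.default_rng(0)
sigma, tau, theta_true = 1.0, 2.0, 0.7
x = theta_true + sigma * rng.standard_normal(50)   # observed data

# Evaluate the unnormalized posterior pi(theta) * p(x, theta) on a grid.
theta = np.linspace(-4.0, 4.0, 4001)
dtheta = theta[1] - theta[0]
log_like = -0.5 * ((x[:, None] - theta[None, :]) ** 2).sum(axis=0) / sigma**2
log_prior = -0.5 * theta**2 / tau**2
log_post = log_like + log_prior
log_post -= log_post.max()                 # stabilize before exponentiating
post = np.exp(log_post)
post /= post.sum() * dtheta                # normalize p(theta | x)

bayes_estimate = (theta * post).sum() * dtheta   # posterior mean

# Closed-form check for this conjugate Gaussian model.
analytic = x.sum() / (len(x) + sigma**2 / tau**2)
print(bayes_estimate, analytic)            # the two values should nearly agree
```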
Suppose that in a given estimation problem we are not able to assign a particular cost function $C(\theta', \theta)$. Then a natural choice is a uniform cost function equal to 0 over a certain interval of the parameter $\theta$. From Bayes' theorem we have
$$
p(\theta \mid x) = \frac{p(x, \theta)\, \pi(\theta)}{p(x)},
$$
where $p(x)$ is the probability distribution of the data $x$. For each data $x$ the Bayes estimate is then any value of $\theta$ that maximizes the conditional density $p(\theta \mid x)$. This density is called the a posteriori probability density of the parameter $\theta$, and the estimator that maximizes $p(\theta \mid x)$ is called the maximum a posteriori (MAP) estimator. It is denoted by $\hat{\theta}_{\mathrm{MAP}}$. We find that the MAP estimators are solutions of the following equation
$$
\frac{\partial \log p(x, \theta)}{\partial \theta} = -\frac{\partial \log \pi(\theta)}{\partial \theta},
$$
which is called the MAP equation.
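As an illustration (again an assumed toy setup, not from the article), the sketch below locates the MAP estimate for Gaussian data with a Cauchy prior, a case where the MAP estimate differs from the posterior mean, and then verifies that the two sides of the MAP equation balance at the maximum.

```python
import numpy as np

# Assumed toy setup: data x_i ~ N(theta, sigma^2) with a Cauchy prior
# pi(theta) proportional to 1 / (1 + (theta/gamma)^2). The MAP estimator
# maximizes log p(x, theta) + log pi(theta), i.e., it solves
#   d log p(x, theta)/d theta = - d log pi(theta)/d theta.

rng = np.random.default_rng(1)
sigma, gamma, theta_true = 1.0, 0.5, 1.5
x = theta_true + sigma * rng.standard_normal(8)

theta = np.linspace(-4.0, 6.0, 10001)
log_like = -0.5 * ((x[:, None] - theta[None, :]) ** 2).sum(axis=0) / sigma**2
log_prior = -np.log1p((theta / gamma) ** 2)      # Cauchy prior, up to a constant
log_post = log_like + log_prior

theta_map = theta[np.argmax(log_post)]           # grid search for the maximum

# Check the MAP equation at the estimate: both sides should nearly agree.
lhs = (x - theta_map).sum() / sigma**2           # d log p(x, theta)/d theta
rhs = 2 * theta_map / (gamma**2 + theta_map**2)  # -d log pi(theta)/d theta
print(theta_map, lhs, rhs)
```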
Often we do not know the a priori probability density of a given parameter and we simply assign to it a uniform probability. In such a case maximization of the a posteriori probability is equivalent to maximization of the probability density $p(x, \theta)$ treated as a function of $\theta$. We call the function
$$
l(\theta, x) := p(x, \theta)
$$
the likelihood function, and the value of the parameter $\theta$ that maximizes $l(\theta, x)$ the maximum likelihood (ML) estimator. Instead of the function $l$ we can use the function $\Lambda(\theta, x) = l(\theta, x)/p(x)$ (assuming that $p(x) > 0$), where $p(x)$ is the probability density of the data when the signal is absent. $\Lambda$ is then equivalent to the likelihood ratio [see Eq. (37)] when the parameters of the signal are known. The ML estimators are then obtained by solving the equation
$$
\frac{\partial \log \Lambda(\theta, x)}{\partial \theta} = 0,
$$
which is called the ML equation.
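The following sketch (an assumed example, not from the article) finds the ML estimate numerically for toy Gaussian data with unknown mean; since the denominator $p(x)$ of $\Lambda$ does not depend on $\theta$, maximizing $\Lambda$ is the same as maximizing $l$, and for this model the ML equation gives the sample mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed toy model: data x_i ~ N(theta, sigma^2) with known sigma.
# The ML estimator maximizes l(theta, x) = p(x, theta); equivalently it
# solves d log Lambda(theta, x)/d theta = 0, because p(x) is
# theta-independent.

rng = np.random.default_rng(2)
sigma, theta_true = 1.0, 0.3
x = theta_true + sigma * rng.standard_normal(100)

def neg_log_likelihood(theta):
    # -log l(theta, x), dropping theta-independent constants
    return 0.5 * np.sum((x - theta) ** 2) / sigma**2

res = minimize_scalar(neg_log_likelihood)
# For this model the ML equation yields theta_ML = sample mean of x.
print(res.x, x.mean())                     # the two values should nearly agree
```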