
4.2 Maximum a posteriori probability estimation

Suppose that in a given estimation problem we are not able to assign a particular cost function C(θ′, θ). Then a natural choice is a uniform cost function, equal to 0 over a certain interval Iθ of the parameter θ and constant outside it. From Bayes' theorem [20] we have
\[
p(\theta|x) = \frac{p(x,\theta)\,\pi(\theta)}{p(x)},
\tag{29}
\]
where p(x) is the probability distribution of the data x. Then from Equation (26) one can deduce that for any given data x the Bayes estimate is any value of θ that maximizes the conditional probability p(θ|x). The density p(θ|x) is also called the a posteriori probability density of the parameter θ, and the estimator that maximizes p(θ|x) is called the maximum a posteriori (MAP) estimator. It is denoted by $\hat{\theta}_{\mathrm{MAP}}$. We find that the MAP estimators are solutions of the following equation
\[
\frac{\partial \log p(x,\theta)}{\partial\theta}
= -\frac{\partial \log \pi(\theta)}{\partial\theta},
\tag{30}
\]
which is called the MAP equation.
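As a simple illustration (a standard conjugate Gaussian case, not part of the general formalism above), suppose the data consist of n independent Gaussian observations x_k with mean θ and variance σ², and that the prior π(θ) is Gaussian with mean μ₀ and variance σ₀². The MAP equation (30) then reads

\[
\frac{1}{\sigma^2}\sum_{k=1}^{n}\left(x_k - \theta\right)
= \frac{\theta - \mu_0}{\sigma_0^2},
\]

with the solution

\[
\hat{\theta}_{\mathrm{MAP}}
= \frac{n\sigma_0^2\,\bar{x} + \sigma^2\mu_0}{n\sigma_0^2 + \sigma^2},
\]

where x̄ is the sample mean. The estimator interpolates between the prior mean μ₀ and the sample mean x̄; as the prior broadens (σ₀ → ∞), it reduces to the maximum likelihood estimate x̄.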
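When no closed form exists, the MAP equation can be solved numerically. The following is a minimal sketch in Python for the Gaussian example above; the sample size, noise level, and prior parameters are illustrative assumptions.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative setup: n Gaussian observations x_k ~ N(theta, sigma^2)
# with a Gaussian prior theta ~ N(mu0, sigma0^2).
rng = np.random.default_rng(0)
sigma, mu0, sigma0 = 1.0, 0.0, 2.0
x = rng.normal(1.5, sigma, size=20)   # simulated data, true theta = 1.5

# Derivatives entering the MAP equation (30).
def dlog_likelihood(theta):           # d/dtheta of log p(x, theta)
    return np.sum(x - theta) / sigma**2

def dlog_prior(theta):                # d/dtheta of log pi(theta)
    return -(theta - mu0) / sigma0**2

# theta_MAP is the root of dlog_likelihood + dlog_prior = 0.
theta_map = brentq(lambda t: dlog_likelihood(t) + dlog_prior(t),
                   -10.0, 10.0)

# Closed-form solution of the conjugate Gaussian case, for comparison.
n = x.size
theta_exact = (n * sigma0**2 * x.mean() + sigma**2 * mu0) \
    / (n * sigma0**2 + sigma**2)
print(theta_map, theta_exact)         # the two values agree
\end{verbatim}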