3.4 Fisher information and Cramér–Rao bound

It is important to know how good our estimators are. We would like our estimators to have as small a variance as possible. A useful lower bound on the variances of parameter estimators is the Cramér–Rao bound, which is expressed in terms of the Fisher information matrix Γ(𝜃). For the signal h(t; 𝜃), which depends on K parameters collected into the vector 𝜃 = (𝜃1, ..., 𝜃K), the components of the matrix Γ(𝜃) are defined as
\[
\Gamma(\theta)_{ij} := \mathrm{E}\!\left[\frac{\partial \log\Lambda[x;\theta]}{\partial\theta_i}\,\frac{\partial \log\Lambda[x;\theta]}{\partial\theta_j}\right] = -\,\mathrm{E}\!\left[\frac{\partial^2 \log\Lambda[x;\theta]}{\partial\theta_i\,\partial\theta_j}\right], \qquad i,j = 1,\ldots,K. \tag{56}
\]
The Cramér–Rao bound states that for unbiased estimators the covariance matrix C(𝜃) of the estimators of 𝜃 fulfills the inequality
\[
\mathrm{C}(\theta) \geq \Gamma(\theta)^{-1}. \tag{57}
\]
(The inequality A ≥ B for matrices means that the matrix A − B is nonnegative definite.)
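For a single parameter (K = 1), the bound reduces to a familiar scalar inequality (a standard special case, spelled out here for illustration):
\[
\operatorname{Var}\,\hat{\theta} \;\geq\; \Gamma(\theta)^{-1} = \mathrm{E}\!\left[\left(\frac{\partial\log\Lambda[x;\theta]}{\partial\theta}\right)^{\!2}\right]^{-1}.
\]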

A very important property of the ML estimators is that asymptotically (i.e., for signal-to-noise ratio tending to infinity) they are (i) unbiased and (ii) Gaussian-distributed, with covariance matrix equal to the inverse of the Fisher information matrix.
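As a simple numerical illustration of these properties, the following Monte Carlo sketch estimates the phase of a sinusoid in white Gaussian noise and compares the scatter of the estimates with the Cramér–Rao bound; the model, the parameter values, and the quadrature-based estimator are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Monte Carlo illustration (toy model, not from the text): a sinusoid
# x_k = A sin(w t_k + phi) + n_k in white Gaussian noise of variance
# sigma^2. We estimate the phase phi and compare the scatter of the
# estimates with the Cramér–Rao bound 1/Gamma(phi).

rng = np.random.default_rng(0)
N, A, w, phi, sigma = 1024, 1.0, 0.3, 0.7, 0.5   # assumed toy values
t = np.arange(N)
sin_t, cos_t = np.sin(w * t), np.cos(w * t)

# Fisher information for phi in discrete white noise:
# Gamma = sum_k (dh_k/dphi)^2 / sigma^2
gamma = np.sum((A * np.cos(w * t + phi))**2) / sigma**2

estimates = []
for _ in range(5000):
    x = A * np.sin(w * t + phi) + sigma * rng.standard_normal(N)
    # Quadrature (matched-filter) phase estimate; up to edge effects it
    # coincides with the ML estimator when the amplitude is also free.
    s = 2.0 / N * np.sum(x * sin_t)   # ~ A cos(phi)
    c = 2.0 / N * np.sum(x * cos_t)   # ~ A sin(phi)
    estimates.append(np.arctan2(c, s))

estimates = np.array(estimates)
print("bias            :", estimates.mean() - phi)   # ~ 0 (unbiased)
print("MC variance     :", estimates.var())
print("CR bound 1/Gamma:", 1.0 / gamma)              # variance ~ bound
```

At this high signal-to-noise ratio the empirical variance essentially saturates the bound, as the asymptotic theory predicts.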

In the case of Gaussian noise the components of the Fisher matrix are given by

\[
\Gamma(\theta)_{ij} = \left(\frac{\partial h(t;\theta)}{\partial\theta_i}\,\middle|\,\frac{\partial h(t;\theta)}{\partial\theta_j}\right), \qquad i,j = 1,\ldots,K, \tag{58}
\]
where the scalar product (⋅|⋅) is defined in Eq. (45).
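To make Eq. (58) concrete, here is a minimal numerical sketch that assembles Γ(𝜃) from finite-difference derivatives of a template, assuming the usual one-sided noise-weighted convention (x|y) = 4 Re ∫ x̃(f) ỹ*(f)/Sₕ(f) df for the scalar product of Eq. (45); the signal model, sampling parameters, and flat PSD are toy assumptions.

```python
import numpy as np

def inner(xf, yf, psd, df):
    # Noise-weighted scalar product, assuming the standard one-sided
    # convention (x|y) = 4 Re \int x~(f) y~*(f) / S_h(f) df for Eq. (45).
    return 4.0 * np.real(np.sum(xf * np.conj(yf) / psd)) * df

def fisher_matrix(h, theta, psd, dt, eps=1e-6):
    # Gamma_ij = (dh/dtheta_i | dh/dtheta_j), Eq. (58), with central
    # finite differences for the parameter derivatives.  h(theta) returns
    # the sampled time-domain signal; psd holds S_h at the rfft bins.
    n = len(h(theta))
    df = 1.0 / (n * dt)
    derivs = []
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps
        dh = (h(theta + dp) - h(theta - dp)) / (2.0 * eps)
        derivs.append(np.fft.rfft(dh) * dt)   # continuous-FT normalization
    K = len(theta)
    gamma = np.empty((K, K))
    for i in range(K):
        for j in range(K):
            gamma[i, j] = inner(derivs[i], derivs[j], psd, df)
    return gamma

# Toy example: monochromatic signal h(t) = A sin(2 pi f0 t + phi),
# parameters theta = (A, f0, phi), flat (white) one-sided PSD.
dt, n = 1.0 / 256, 4096
t = np.arange(n) * dt
h = lambda th: th[0] * np.sin(2.0 * np.pi * th[1] * t + th[2])
psd = np.full(n // 2 + 1, 1e-2)
gamma = fisher_matrix(h, np.array([1.0, 20.0, 0.5]), psd, dt)
crb = np.linalg.inv(gamma)            # Cramér–Rao bound on the covariance
print(np.sqrt(np.diag(crb)))          # 1-sigma lower bounds on (A, f0, phi)
```

Inverting the resulting matrix, as in Eq. (57), gives the lower bounds on the variances of the parameter estimators.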

