### 3.4 Fisher information and Cramér–Rao bound

It is important to know how good our estimators are. We would like our estimators to have as small a variance as possible. There is a useful lower bound on the variances of parameter estimators, called the Cramér–Rao bound, which is expressed in terms of the Fisher information matrix $\Gamma$. For the signal $s(t;\theta)$, which depends on parameters collected into the vector $\theta = (\theta_1,\ldots,\theta_m)$, the components of the matrix $\Gamma$ are defined as

$$\Gamma_{ij} := \mathrm{E}\!\left[\frac{\partial \ln\Lambda}{\partial\theta_i}\,\frac{\partial \ln\Lambda}{\partial\theta_j}\right],$$

where $\Lambda$ is the likelihood ratio. The Cramér–Rao bound states that for unbiased estimators the covariance matrix $C$ of the estimators fulfills the inequality

$$C \geq \Gamma^{-1}.$$

(The inequality for matrices means that the matrix $C - \Gamma^{-1}$ is nonnegative definite.)
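For a single parameter the matrix inequality reduces to the familiar scalar bound

$$\operatorname{Var}\bigl(\hat\theta\bigr) \;\geq\; \frac{1}{\Gamma} = \left(\mathrm{E}\!\left[\left(\frac{\partial \ln\Lambda}{\partial\theta}\right)^{2}\right]\right)^{-1},$$

i.e., no unbiased estimator can have a variance smaller than the reciprocal of the Fisher information.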
A very important property of the ML estimators is that asymptotically (i.e., for signal-to-noise ratio tending to infinity) they are (i) unbiased and (ii) normally distributed, with covariance matrix equal to the inverse of the Fisher information matrix.
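This asymptotic behavior can be checked numerically. The following is a minimal Monte Carlo sketch for a hypothetical one-parameter example (amplitude estimation of a known waveform in discrete white Gaussian noise, not a model taken from the text): the ML estimator turns out to be unbiased, and its variance matches the inverse Fisher information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: x[k] = A*h[k] + n[k], with n[k] white Gaussian
# noise of variance sigma**2 and h[k] a known template.
N = 512
t = np.arange(N) / N
h = np.sin(2 * np.pi * 5 * t)          # known template (5 full cycles)
A_true, sigma = 3.0, 0.5

# Discrete-time Fisher information for the amplitude A, and the
# corresponding Cramer-Rao bound 1/Gamma.
gamma = np.sum(h**2) / sigma**2
crb = 1.0 / gamma

# The ML estimator of A is the least-squares projection of the data
# onto the template h.
trials = 20000
x = A_true * h + sigma * rng.standard_normal((trials, N))
A_hat = x @ h / np.sum(h**2)

print(A_hat.mean())   # close to A_true: the estimator is unbiased
print(A_hat.var())    # close to crb: variance attains the Fisher bound
```

In this linear-in-amplitude model the bound is attained exactly, not only asymptotically; for parameters entering the signal nonlinearly the agreement holds only at high signal-to-noise ratio.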

In the case of Gaussian noise the components of the Fisher matrix are given by

$$\Gamma_{ij} = \left(\frac{\partial s}{\partial\theta_i}\,\middle|\,\frac{\partial s}{\partial\theta_j}\right),$$

where the scalar product $(\,\cdot\,|\,\cdot\,)$ is defined in Eq. (45).