3.1 Hypothesis testing

The problem of detecting the signal in noise can be posed as a statistical hypothesis testing problem. The null hypothesis H0 is that the signal is absent from the data and the alternative hypothesis H1 is that the signal is present. A hypothesis test (or decision rule) δ is a partition of the observation set into two subsets, ℛ and its complement ℛ′. If the data are in ℛ we accept the null hypothesis, otherwise we reject it. There are two kinds of errors that we can make. A type I error is choosing hypothesis H1 when H0 is true, and a type II error is choosing H0 when H1 is true. In signal detection theory the probability of a type I error is called the false alarm probability, whereas the probability of a type II error is called the false dismissal probability. 1 − (false dismissal probability) is the probability of detection of the signal. In hypothesis testing theory, the probability of a type I error is called the significance of the test, whereas 1 − (probability of a type II error) is called the power of the test.
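
As a concrete illustration (our addition, not part of the original analysis), the following Python sketch estimates the false alarm and detection probabilities by Monte Carlo for a toy problem: a single Gaussian observation, with an assumed signal amplitude and an assumed decision threshold.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
s = 2.0           # assumed signal amplitude (toy model)
k = 1.5           # assumed threshold: choose H1 when x > k
n_trials = 100_000

# H0: x = noise only;  H1: x = signal + noise, with noise ~ N(0, 1)
x_h0 = rng.standard_normal(n_trials)
x_h1 = s + rng.standard_normal(n_trials)

# Type I error (false alarm): choosing H1 when H0 is true.
p_fa = np.mean(x_h0 > k)
# Type II error (false dismissal): choosing H0 when H1 is true.
p_fd = np.mean(x_h1 <= k)

print(f"false alarm probability ~ {p_fa:.4f} (exact: {1 - norm.cdf(k):.4f})")
print(f"detection probability   ~ {1 - p_fd:.4f} (exact: {1 - norm.cdf(k - s):.4f})")
```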

The problem is to find a test that is in some way optimal. There are several approaches to finding such a test. The subject is covered in detail in many books on statistics, for example, see [72, 51, 80, 83].

3.1.1 Bayesian approach

In the Bayesian approach we assign costs to our decisions; in particular, we introduce positive numbers Cij, i,j = 0,1, where Cij is the cost incurred by choosing hypothesis Hi when hypothesis Hj is true. We define the conditional risk Rj of a decision rule δ under each hypothesis Hj as

\[
R_j(\delta) = C_{0j}\,P_j(\mathcal{R}) + C_{1j}\,P_j(\mathcal{R}'), \qquad j = 0,1, \tag{35}
\]
where Pj is the probability distribution of the data when hypothesis Hj is true. Next, we assign probabilities π0 and π1 = 1 − π0 to the occurrences of hypotheses H0 and H1, respectively. These probabilities are called a priori probabilities or priors. We define the Bayes risk as the overall average cost incurred by the decision rule δ:
\[
r(\delta) = \pi_0 R_0(\delta) + \pi_1 R_1(\delta). \tag{36}
\]
Finally we define the Bayes rule as the rule that minimizes the Bayes risk r(δ).
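
To make Eqs. (35)–(36) concrete, the sketch below (our illustration, using the same toy Gaussian model as above with assumed priors and costs) evaluates the Bayes risk of a family of threshold rules and checks numerically that the minimizing threshold agrees with the standard Bayes likelihood-ratio threshold k = π0(C10 − C00)/(π1(C01 − C11)) of Section 3.1.4.

```python
import numpy as np
from scipy.stats import norm

# Toy Gaussian model from before: under H0 x ~ N(0, 1), under H1 x ~ N(s, 1).
s = 2.0
pi0, pi1 = 0.8, 0.2              # assumed priors
C = {(0, 0): 0.0, (1, 0): 1.0,   # C[i, j]: cost of choosing Hi when Hj is true
     (0, 1): 5.0, (1, 1): 0.0}   # assumed cost assignment

def bayes_risk(t):
    """Bayes risk r(delta) of the rule 'choose H1 when x > t' (Eqs. 35-36)."""
    p0_reject = 1 - norm.cdf(t)      # P0(R') = P(x > t | H0)
    p1_accept = norm.cdf(t - s)      # P1(R)  = P(x <= t | H1)
    R0 = C[(0, 0)] * (1 - p0_reject) + C[(1, 0)] * p0_reject
    R1 = C[(0, 1)] * p1_accept + C[(1, 1)] * (1 - p1_accept)
    return pi0 * R0 + pi1 * R1

thresholds = np.linspace(-2, 5, 2001)
t_star = thresholds[np.argmin([bayes_risk(t) for t in thresholds])]

# The optimum should agree with the likelihood-ratio threshold of Section 3.1.4:
# Lambda(x) > k with k = pi0*(C10 - C00) / (pi1*(C01 - C11)), which for this
# Gaussian model is equivalent to x > ln(k)/s + s/2.
k = pi0 * (C[(1, 0)] - C[(0, 0)]) / (pi1 * (C[(0, 1)] - C[(1, 1)]))
print(f"numerical minimizer: {t_star:.3f}, analytic LR threshold: {np.log(k)/s + s/2:.3f}")
```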

3.1.2 Minimax approach

Very often in practice we do not have control over or access to the mechanism generating the state of nature, and we are not able to assign priors to the various hypotheses. In such a case one criterion is to seek a decision rule that minimizes, over all δ, the maximum of the conditional risks, R0(δ) and R1(δ). A decision rule that fulfills this criterion is called a minimax rule.
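
A standard route to a minimax rule is to look for an equalizer rule, one whose two conditional risks are equal. The short sketch below (our illustration, for the same toy Gaussian model with assumed 0–1 costs and an assumed root-finding bracket) finds the threshold that equalizes R0 and R1 numerically.

```python
from scipy.optimize import brentq
from scipy.stats import norm

s = 2.0  # same toy Gaussian model; 0-1 costs (C00 = C11 = 0, C10 = C01 = 1)

def risk_gap(t):
    R0 = 1 - norm.cdf(t)      # P(choose H1 | H0): type I error probability
    R1 = norm.cdf(t - s)      # P(choose H0 | H1): type II error probability
    return R0 - R1

t_minimax = brentq(risk_gap, -5, 5)   # equalize the two conditional risks
print(f"minimax threshold: {t_minimax:.3f}  (here s/2 = {s/2} by symmetry)")
```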

3.1.3 Neyman–Pearson approach

In many problems of practical interest the imposition of a specific cost structure on the decisions made is not possible or desirable. The Neyman–Pearson approach involves a trade-off between the two types of errors that one can make in choosing a particular hypothesis. The Neyman–Pearson design criterion is to maximize the power of the test (probability of detection) subject to a chosen significance of the test (false alarm probability).
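
In the toy Gaussian model used above, the Neyman–Pearson recipe is direct: fix the false alarm probability, solve for the threshold, and read off the power. A minimal sketch (our illustration, with an assumed significance level):

```python
from scipy.stats import norm

s = 2.0         # same toy Gaussian model as above
alpha = 0.01    # assumed false alarm probability (significance of the test)

# Choose the threshold so that P(x > k | H0) = alpha ...
k = norm.ppf(1 - alpha)
# ... then the power (detection probability) follows:
power = 1 - norm.cdf(k - s)
print(f"threshold k = {k:.3f}, power = {power:.3f} at significance {alpha}")
```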

3.1.4 Likelihood ratio test

It is remarkable that all three very different approaches – Bayesian, minimax, and Neyman–Pearson – lead to the same test, called the likelihood ratio test [44]. The likelihood ratio Λ is the ratio of the pdf when the signal is present to the pdf when it is absent:

\[
\Lambda(x) := \frac{p_1(x)}{p_0(x)}. \tag{37}
\]
We accept the hypothesis H1 if Λ > k, where k is the threshold calculated from the costs Cij, the priors πi, or the significance of the test, depending on which approach is being used.
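
For the standard case of a known signal in white Gaussian noise of unit variance, the log likelihood ratio reduces to a correlation of the data with the signal (a matched-filter statistic): log Λ(x) = s·x − |s|²/2. The sketch below (our illustration, with an assumed sinusoidal waveform) evaluates log Λ on noise-only and signal-plus-noise realizations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
n = np.arange(N)
s = 0.3 * np.sin(2 * np.pi * n / 32)   # assumed known signal waveform

def log_likelihood_ratio(x, s):
    """log Lambda(x) for a known signal s in white Gaussian noise of unit
    variance: log p1(x)/p0(x) = s.x - |s|^2 / 2 (a matched-filter statistic)."""
    return np.dot(s, x) - 0.5 * np.dot(s, s)

x_noise = rng.standard_normal(N)         # a realization under H0
x_signal = s + rng.standard_normal(N)    # a realization under H1
print(f"log Lambda under H0: {log_likelihood_ratio(x_noise, s):+.2f}")
print(f"log Lambda under H1: {log_likelihood_ratio(x_signal, s):+.2f}")
```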