### 3.1 Hypothesis testing

The problem of detecting the signal in noise can be posed as a statistical hypothesis testing problem.
The null hypothesis is that the signal is absent from the data and the alternative hypothesis is
that the signal is present. A hypothesis test (or decision rule) $\delta$ is a partition of the observation set into
two subsets, $\mathcal{R}$ and its complement $\mathcal{R}'$. If the data are in $\mathcal{R}$ we accept the null hypothesis, otherwise we
reject it. There are two kinds of errors that we can make. A type I error is choosing hypothesis $H_1$ when
$H_0$ is true, and a type II error is choosing $H_0$ when $H_1$ is true. In signal detection theory the
probability of a type I error is called the false alarm probability, whereas the probability of a type II error is
called the false dismissal probability. One minus the false dismissal probability is the probability of
detection of the signal. In hypothesis testing theory, the probability of a type I error is called the
significance of the test, whereas one minus the probability of a type II error is called the power of the
test.
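These quantities can be made concrete with a minimal sketch. It assumes a toy model not specified in the text: a single observation of a constant signal of known amplitude `mu` in zero-mean, unit-variance Gaussian noise, tested with a simple threshold rule; the values of `mu` and `threshold` are hypothetical choices for illustration.

```python
# Sketch: type I / type II error probabilities for a threshold test on one
# sample x, deciding "signal present" when x > threshold.
# Model (assumed, not from the text): x = n under H0, x = mu + n under H1,
# with n ~ N(0, 1).
from math import erf, sqrt

def gaussian_sf(x):
    """Survival function P(X > x) of a standard normal variable."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

mu = 2.0          # signal amplitude under H1 (hypothetical)
threshold = 1.5   # decision threshold (hypothetical)

p_false_alarm = gaussian_sf(threshold)                  # type I error probability
p_false_dismissal = 1.0 - gaussian_sf(threshold - mu)   # type II error probability
p_detection = 1.0 - p_false_dismissal                   # power of the test

print(f"false alarm:     {p_false_alarm:.4f}")
print(f"false dismissal: {p_false_dismissal:.4f}")
print(f"detection:       {p_detection:.4f}")
```

Raising the threshold lowers the false alarm probability but also lowers the detection probability; the approaches below differ only in how they resolve this trade-off.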
The problem is to find a test that is in some way optimal. There are several approaches to
finding such a test. The subject is covered in detail in many books on statistics, for example,
see [72, 51, 80, 83].

#### 3.1.1 Bayesian approach

In the Bayesian approach we assign costs to our decisions; in particular, we introduce positive numbers
$C_{ij}$, $i, j = 0, 1$, where $C_{ij}$ is the cost incurred by choosing hypothesis $H_i$ when hypothesis $H_j$
is true. We define the conditional risk of a decision rule $\delta$ for each hypothesis as

$$R_j(\delta) = C_{0j}\,P_j(\mathcal{R}) + C_{1j}\,P_j(\mathcal{R}'), \quad j = 0, 1,$$

where $P_j$ is the probability distribution of the data when hypothesis $H_j$ is true. Next, we assign
probabilities $\pi_0$ and $\pi_1$ to the occurrences of hypotheses $H_0$ and $H_1$, respectively. These
probabilities are called a priori probabilities or priors. We define the Bayes risk as the overall average cost
incurred by the decision rule $\delta$:

$$r(\delta) = \pi_0 R_0(\delta) + \pi_1 R_1(\delta).$$

Finally, we define the Bayes rule as the rule that minimizes the Bayes risk $r(\delta)$.
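The minimization can be sketched numerically for the same toy Gaussian model used above (an assumption, not part of the text): among all threshold rules "choose $H_1$ iff $x > k$", the Bayes risk is minimized at the threshold implied by the likelihood ratio test with $\lambda_0 = \pi_0(C_{10}-C_{00})/(\pi_1(C_{01}-C_{11}))$. All numeric values of the costs and priors here are hypothetical.

```python
# Sketch: Bayes risk of threshold rules for x = mu + n (H1) vs x = n (H0),
# n ~ N(0, 1).  Costs C_ij and priors pi_0, pi_1 are hypothetical.
from math import erf, sqrt, log

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu = 1.0                        # signal amplitude under H1 (assumed)
C = {(0, 0): 0.0, (1, 1): 0.0,  # C[i, j]: cost of choosing H_i when H_j is true
     (1, 0): 1.0, (0, 1): 2.0}
pi0, pi1 = 0.6, 0.4             # priors (hypothetical)

def bayes_risk(k):
    """Bayes risk r(delta) of the rule 'choose H1 iff x > k'."""
    # Under H0: P0(choose H0) = Phi(k); under H1: P1(choose H0) = Phi(k - mu).
    R0 = C[(0, 0)] * Phi(k) + C[(1, 0)] * (1.0 - Phi(k))
    R1 = C[(0, 1)] * Phi(k - mu) + C[(1, 1)] * (1.0 - Phi(k - mu))
    return pi0 * R0 + pi1 * R1

# Grid minimization vs the analytic Bayes threshold: for this Gaussian model
# Lambda(x) > lambda_0 is equivalent to x > ln(lambda_0)/mu + mu/2.
lambda_0 = pi0 * (C[(1, 0)] - C[(0, 0)]) / (pi1 * (C[(0, 1)] - C[(1, 1)]))
k_analytic = log(lambda_0) / mu + mu / 2.0
ks = [i * 0.001 for i in range(-2000, 4000)]
k_best = min(ks, key=bayes_risk)
print(f"grid minimum k = {k_best:.3f}, analytic k = {k_analytic:.3f}")
```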

#### 3.1.2 Minimax approach

Very often in practice we do not have control over or access to the mechanism generating the state of nature
and we are not able to assign priors to various hypotheses. In such a case one criterion is to seek a decision
rule that minimizes, over all $\delta$, the maximum of the conditional risks, $R_0(\delta)$ and $R_1(\delta)$. A decision
rule that fulfills this criterion is called a minimax rule.
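For the toy Gaussian model with symmetric 0–1 costs (an assumption not made in the text), the minimax threshold rule is the one that equalizes the two conditional risks, which can be found by bisection. This is a sketch of that standard equalizer construction, not a general minimax solver.

```python
# Sketch: minimax threshold for x = mu + n (H1) vs x = n (H0), n ~ N(0, 1),
# with 0-1 costs (C00 = C11 = 0, C10 = C01 = 1).  mu is hypothetical.
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu = 1.0  # signal amplitude under H1 (assumed)

def R0(k):
    """Conditional risk under H0: the false alarm probability."""
    return 1.0 - Phi(k)

def R1(k):
    """Conditional risk under H1: the false dismissal probability."""
    return Phi(k - mu)

# With 0-1 costs the minimax threshold equalizes the conditional risks
# (an "equalizer rule"); R0 decreases and R1 increases in k, so bisect.
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R0(mid) > R1(mid):
        lo = mid
    else:
        hi = mid
k_minimax = 0.5 * (lo + hi)
print(f"minimax threshold = {k_minimax:.3f}, "
      f"R0 = {R0(k_minimax):.4f}, R1 = {R1(k_minimax):.4f}")
```

By the symmetry of this model the equalizing threshold lands at $k = \mu/2$, midway between the two hypothesized means.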

#### 3.1.3 Neyman–Pearson approach

In many problems of practical interest the imposition of a specific cost structure on the decisions made is
not possible or desirable. The Neyman–Pearson approach involves a trade-off between the two types of
errors that one can make in choosing a particular hypothesis. The Neyman–Pearson design criterion is to
maximize the power of the test (probability of detection) subject to a chosen significance of the test (false
alarm probability).
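In the same toy Gaussian model (assumed, as before), the Neyman–Pearson construction amounts to setting the threshold from the chosen false alarm probability and then reading off the resulting power. The value of `alpha` is a hypothetical choice.

```python
# Sketch: Neyman-Pearson threshold for x = mu + n (H1) vs x = n (H0),
# n ~ N(0, 1).  The significance alpha is fixed first; power follows.
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (sufficient for this sketch)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = 2.0       # signal amplitude under H1 (assumed)
alpha = 0.01   # chosen significance, i.e. false alarm probability (hypothetical)

k = Phi_inv(1.0 - alpha)    # threshold fixed by the false alarm constraint
power = 1.0 - Phi(k - mu)   # resulting probability of detection
print(f"threshold = {k:.3f}, power = {power:.4f}")
```

Note the one-way dependence: $\alpha$ determines the threshold, and the power is whatever the signal strength then delivers; no costs or priors enter.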

#### 3.1.4 Likelihood ratio test

It is remarkable that all three very different approaches – Bayesian, minimax, and Neyman–Pearson – lead
to the same test called the likelihood ratio test [44]. The likelihood ratio $\Lambda$ is the ratio of the pdf when
the signal is present to the pdf when it is absent:

$$\Lambda(x) = \frac{p_1(x)}{p_0(x)}.$$

We accept the hypothesis $H_1$ if $\Lambda > \lambda_0$, where $\lambda_0$ is the threshold that is calculated from the
costs $C_{ij}$, priors $\pi_i$, or the significance of the test, depending on which approach is being
used.
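The test itself is short to write down. The sketch below again assumes the toy model of a known-amplitude signal in unit-variance Gaussian noise; `mu` and `threshold` are hypothetical, and in practice $\lambda_0$ would come from the costs, priors, or significance as described above.

```python
# Sketch: likelihood ratio test for x = mu + n (H1) vs x = n (H0), n ~ N(0, 1).
from math import exp, pi, sqrt

def normal_pdf(x, mean):
    """Density of N(mean, 1) at x."""
    return exp(-0.5 * (x - mean) ** 2) / sqrt(2.0 * pi)

mu = 1.5          # known signal amplitude (hypothetical)
threshold = 1.0   # lambda_0 (hypothetical; set from costs, priors, or alpha)

def likelihood_ratio(x):
    """Lambda(x) = p1(x) / p0(x)."""
    return normal_pdf(x, mu) / normal_pdf(x, 0.0)

def decide(x):
    """Return 1 (accept H1) if Lambda(x) > threshold, else 0 (accept H0)."""
    return 1 if likelihood_ratio(x) > threshold else 0

# For this model Lambda(x) = exp(mu*x - mu**2/2), so Lambda(x) > 1 reduces
# to the simple threshold comparison x > mu/2 = 0.75.
print(decide(0.5), decide(1.0))  # → 0 1
```

Because $\Lambda$ is monotone in $x$ here, the likelihood ratio test collapses to comparing the data against a single threshold, which is why the three design criteria differ only in where they place $\lambda_0$.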