
4.5 False alarm and detection probabilities – Gaussian case

4.5.1 Statistical properties of the ℱ-statistic

We first present the false alarm and detection pdfs when the intrinsic parameters of the signal are known. In this case the statistic ℱ is a quadratic form of the random variables that are correlations of the data. As we assume that the noise in the data is Gaussian and the correlations are linear functions of the data, ℱ is a quadratic form of Gaussian random variables. Consequently the ℱ-statistic has a distribution related to the χ² distribution. One can show (see Section III B in [49]) that for the signal given by Equation (14), 2ℱ has a χ² distribution with 4 degrees of freedom when the signal is absent, and a noncentral χ² distribution with 4 degrees of freedom and non-centrality parameter equal to (h|h), the square of the optimal signal-to-noise ratio, when the signal is present.

As a result, the pdfs p0 and p1 of ℱ when the intrinsic parameters are known, and when the signal is respectively absent or present, are given by

\[
p_0(\mathcal{F}) = \frac{\mathcal{F}^{n/2-1}}{(n/2-1)!}\,\exp(-\mathcal{F}), \qquad (46)
\]
\[
p_1(\rho,\mathcal{F}) = \frac{(2\mathcal{F})^{(n/2-1)/2}}{\rho^{n/2-1}}\, I_{n/2-1}\!\left(\rho\sqrt{2\mathcal{F}}\right) \exp\!\left(-\mathcal{F} - \tfrac{1}{2}\rho^{2}\right), \qquad (47)
\]
where n is the number of degrees of freedom of the χ² distributions and I_{n/2−1} is the modified Bessel function of the first kind and order n/2 − 1. The false alarm probability PF is the probability that ℱ exceeds a certain threshold ℱ0 when there is no signal. In our case we have
\[
P_F(\mathcal{F}_0) := \int_{\mathcal{F}_0}^{\infty} p_0(\mathcal{F})\, d\mathcal{F} = \exp(-\mathcal{F}_0) \sum_{k=0}^{n/2-1} \frac{\mathcal{F}_0^{\,k}}{k!}. \qquad (48)
\]
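As a quick numerical check of Equation (48), the sketch below (assuming n = 4 degrees of freedom and using SciPy) compares the closed-form sum with the survival function of the χ² distribution followed by 2ℱ:

```python
import math
import numpy as np
from scipy.stats import chi2

def false_alarm_probability(F0, n=4):
    """Per-cell false alarm probability P_F(F0) of Eq. (48)."""
    k = np.arange(n // 2)
    factorials = np.array([math.factorial(i) for i in k], dtype=float)
    return np.exp(-F0) * np.sum(F0 ** k / factorials)

F0 = 10.0
print(false_alarm_probability(F0))   # closed-form sum of Eq. (48)
print(chi2.sf(2 * F0, df=4))         # same value: 2F ~ chi^2 with 4 d.o.f.
```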
The probability of detection PD is the probability that ℱ exceeds the threshold ℱ0 when the signal-to-noise ratio is equal to ρ:
\[
P_D(\rho, \mathcal{F}_0) := \int_{\mathcal{F}_0}^{\infty} p_1(\rho, \mathcal{F})\, d\mathcal{F}. \qquad (49)
\]
The integral in the above formula can be expressed in terms of the generalized Marcum Q-function [94, 44], Q(α, β) = PD(α, β²/2). We see that when the noise in the detector is Gaussian and the intrinsic parameters are known, the probability of detection of the signal depends on a single quantity: the optimal signal-to-noise ratio ρ.
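Since 2ℱ follows a noncentral χ² distribution with n degrees of freedom and non-centrality parameter ρ², the integral (49) can be evaluated directly from the corresponding survival function; a minimal sketch, again assuming n = 4:

```python
from scipy.stats import ncx2

def detection_probability(rho, F0, n=4):
    """P_D(rho, F0) of Eq. (49): 2F is noncentral chi^2 with n degrees of
    freedom and non-centrality parameter rho^2, so the integral over the
    pdf p_1 reduces to a survival function."""
    return ncx2.sf(2 * F0, df=n, nc=rho ** 2)

# Example: detection probability at threshold F0 = 10 for signal-to-noise ratio 8.
print(detection_probability(rho=8.0, F0=10.0))
```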

4.5.2 False alarm probability

Next we return to the case when the intrinsic parameters ξ are not known. Then the statistic ℱ(ξ) given by Equation (35) is a certain generalized multiparameter random process called the random field (see Adler’s monograph [4] for a comprehensive discussion of random fields). If the vector ξ has one component, the random field is simply a random process. For random fields we can define the autocovariance function 𝒞 just in the same way as we define such a function for a random process:

\[
\mathcal{C}(\xi, \xi') := \mathrm{E}_0[\mathcal{F}(\xi)\,\mathcal{F}(\xi')] - \mathrm{E}_0[\mathcal{F}(\xi)]\,\mathrm{E}_0[\mathcal{F}(\xi')], \qquad (50)
\]
where ξ and ξ′ are two values of the intrinsic parameter set, and E0 is the expectation value when the signal is absent. One can show that for the signal (14) the autocovariance function 𝒞 is given by
\[
\mathcal{C}(\xi, \xi') = \frac{1}{4}\,\mathrm{tr}\!\left( Q^{\mathrm{T}} \cdot M^{-1} \cdot Q \cdot M'^{-1} \right), \qquad (51)
\]
where
\[
Q^{(k)(l)} := \left( h^{(k)}(t;\xi) \,\big|\, h^{(l)}(t;\xi') \right), \qquad M'^{(k)(l)} := \left( h^{(k)}(t;\xi') \,\big|\, h^{(l)}(t;\xi') \right). \qquad (52)
\]
We have 𝒞(ξ, ξ) = 1.
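The sketch below evaluates Equations (51)–(52) numerically for an illustrative four-component waveform family in white noise (the basis waveforms and observation time are assumptions made purely for the example); the normalization of the scalar product cancels in 𝒞, and 𝒞(ξ, ξ) = 1 is recovered at zero displacement.

```python
import numpy as np

T = 1.0                                   # illustrative observation time
t = np.linspace(0.0, T, 4000)
dt = t[1] - t[0]

def basis(xi):
    """Illustrative four-component waveform family h^(k)(t; xi)."""
    return np.array([np.cos(xi * t), np.sin(xi * t),
                     (t / T) * np.cos(xi * t), (t / T) * np.sin(xi * t)])

def inner(x, y):
    # White-noise scalar product ~ integral of x*y; its normalization cancels in C.
    return np.dot(x, y) * dt

def autocovariance(xi, xi_p):
    """C(xi, xi') of Eq. (51), with Q and M' built as in Eq. (52)."""
    h, hp = basis(xi), basis(xi_p)
    M  = np.array([[inner(a, b) for b in h]  for a in h])    # at xi
    Mp = np.array([[inner(a, b) for b in hp] for a in hp])   # at xi'
    Q  = np.array([[inner(a, b) for b in hp] for a in h])    # cross terms
    return 0.25 * np.trace(Q.T @ np.linalg.inv(M) @ Q @ np.linalg.inv(Mp))

print(autocovariance(100.0, 100.0))   # = 1 at zero displacement
print(autocovariance(100.0, 104.0))   # decreases as the displacement grows
```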

One can estimate the false alarm probability in the following way [50]. The autocovariance function 𝒞 tends to zero as the displacement Δξ = ξ′ − ξ increases (it is maximal for Δξ = 0). Thus we can divide the parameter space into elementary cells such that in each cell the autocovariance function 𝒞 is appreciably different from zero. The realizations of the random field within a cell will be correlated (dependent), whereas realizations of the random field inside a cell and outside of it are almost uncorrelated (independent). Thus the number of cells covering the parameter space gives an estimate of the number of independent realizations of the random field. The correlation hypersurface is a closed surface defined by the requirement that at its boundary the correlation 𝒞 equals half of its maximum value. The elementary cell is defined by the equation

\[
\mathcal{C}(\xi, \xi') = \frac{1}{2} \qquad (53)
\]
for ξ at the cell center and ξ′ on the cell boundary. To estimate the number of cells we perform a Taylor expansion of the autocovariance function up to second-order terms:
\[
\mathcal{C}(\xi, \xi') \cong 1 + \left. \frac{\partial \mathcal{C}(\xi, \xi')}{\partial \xi'_i} \right|_{\xi'=\xi} \Delta\xi_i + \frac{1}{2} \left. \frac{\partial^2 \mathcal{C}(\xi, \xi')}{\partial \xi'_i \, \partial \xi'_j} \right|_{\xi'=\xi} \Delta\xi_i \, \Delta\xi_j. \qquad (54)
\]
As 𝒞 attains its maximum value when ξ′ − ξ = 0, we have
\[
\left. \frac{\partial \mathcal{C}(\xi, \xi')}{\partial \xi'_i} \right|_{\xi'=\xi} = 0. \qquad (55)
\]
Let us introduce the symmetric matrix
\[
G_{ij} := -\frac{1}{2} \left. \frac{\partial^2 \mathcal{C}(\xi, \xi')}{\partial \xi'_i \, \partial \xi'_j} \right|_{\xi'=\xi}. \qquad (56)
\]
Then the approximate equation for the elementary cell is given by
\[
G_{ij}\, \Delta\xi_i \, \Delta\xi_j = \frac{1}{2}. \qquad (57)
\]
It is interesting to find a relation between the matrix G and the Fisher matrix. One can show (see [60], Appendix B) that the matrix G is precisely equal to the reduced Fisher matrix Γ̃ given by Equation (43).
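Given any autocovariance function 𝒞 that can be evaluated numerically, the matrix G of Equation (56) can be estimated by central finite differences. The sketch below uses an illustrative Gaussian-shaped 𝒞, for which G = A/2 exactly; in a real application one would insert the 𝒞 of Equation (51), or compute the reduced Fisher matrix directly.

```python
import numpy as np

# Illustrative autocovariance C = exp(-0.5 * d^T A d) with d = xi' - xi,
# for which Eq. (56) gives G = A / 2 exactly.
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])

def C(xi, xi_p):
    d = np.asarray(xi_p, dtype=float) - np.asarray(xi, dtype=float)
    return np.exp(-0.5 * d @ A @ d)

def estimate_G(C, xi, eps=1e-4):
    """G_ij = -0.5 d^2 C / dxi'_i dxi'_j at xi' = xi (Eq. 56), by central differences."""
    xi = np.asarray(xi, dtype=float)
    K = xi.size
    G = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            ei = np.eye(K)[i] * eps
            ej = np.eye(K)[j] * eps
            d2 = (C(xi, xi + ei + ej) - C(xi, xi + ei - ej)
                  - C(xi, xi - ei + ej) + C(xi, xi - ei - ej)) / (4.0 * eps ** 2)
            G[i, j] = -0.5 * d2
    return G

print(estimate_G(C, xi=[0.0, 0.0]))   # ~ A / 2
```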

Let K be the number of intrinsic parameters. If the components of the matrix G are constant (independent of the values of the parameters of the signal), the above equation is an equation for a hyperellipse. The K-dimensional Euclidean volume Vcell of the elementary cell defined by Equation (57) equals

\[
V_{\mathrm{cell}} = \frac{(\pi/2)^{K/2}}{\Gamma(K/2 + 1)\, \sqrt{\det G}}, \qquad (58)
\]
where Γ denotes the Gamma function. We estimate the number Nc of elementary cells by dividing the total Euclidean volume V of the K-dimensional parameter space by the volume Vcell of the elementary cell, i.e., we have
\[
N_c = \frac{V}{V_{\mathrm{cell}}}. \qquad (59)
\]
The components of the matrix G are constant for the signal h(t; A0, φ0, ξμ) = A0 cos(φ(t; ξμ) − φ0) when the phase φ(t; ξμ) is a linear function of the intrinsic parameters ξμ.
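For constant G, Equations (58)–(59) are straightforward to evaluate; in the sketch below the matrix G and the parameter-space volume V are illustrative numbers only.

```python
import numpy as np
from scipy.special import gamma

def cell_volume(G):
    """Euclidean volume of the elementary cell, Eq. (58)."""
    K = G.shape[0]
    return (np.pi / 2) ** (K / 2) / (gamma(K / 2 + 1) * np.sqrt(np.linalg.det(G)))

G = np.array([[4.0, 1.0],      # illustrative constant matrix G (K = 2)
              [1.0, 2.0]])
V = 1.0e6                      # illustrative Euclidean volume of the parameter space

N_c = V / cell_volume(G)       # Eq. (59)
print(cell_volume(G), N_c)
```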

To estimate the number of cells in the case when the components of the matrix G are not constant, i.e., when they depend on the values of the parameters, we write Equation (59) as

\[
N_c = \frac{\Gamma(K/2 + 1)}{(\pi/2)^{K/2}} \int_V \sqrt{\det G}\; dV. \qquad (60)
\]
This procedure can be thought of as interpreting the matrix G as the metric on the parameter space. This interpretation appeared for the first time in the context of gravitational-wave data analysis in the work by Owen [74], where an analogous integral formula was proposed for the number of templates needed to perform a search for gravitational-wave signals from coalescing binaries.
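When G depends on the parameters, the integral (60) can be approximated on a grid; the sketch below uses an illustrative two-parameter G(ξ) and a crude Riemann sum.

```python
import numpy as np
from scipy.special import gamma

def G_of_xi(x, y):
    """Illustrative parameter-dependent matrix G(xi); in practice this would be
    the reduced Fisher matrix of the signal model."""
    return np.array([[4.0 + x ** 2, 1.0],
                     [1.0,          2.0 + y ** 2]])

K = 2
xs = np.linspace(0.0, 10.0, 201)           # rectangular parameter space [0, 10]^2
ys = np.linspace(0.0, 10.0, 201)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]

# Crude Riemann sum for the integral of sqrt(det G) over the parameter space.
integral = sum(np.sqrt(np.linalg.det(G_of_xi(x, y)))
               for x in xs for y in ys) * dx * dy

N_c = gamma(K / 2 + 1) / (np.pi / 2) ** (K / 2) * integral   # Eq. (60)
print(N_c)
```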

The concept of the number of cells was introduced in [50]; it generalizes the idea of an effective number of samples introduced in [36] for the case of a coalescing binary signal.

We approximate the probability distribution of ℱ(ξ) in each cell by the probability p0(ℱ) for the case when the parameters are known [in our case the pdf given by Equation (46)]. The values of the statistic ℱ in different cells can be considered as independent random variables. The probability that ℱ does not exceed the threshold ℱ0 in a given cell is 1 − PF(ℱ0), where PF(ℱ0) is given by Equation (48). Consequently the probability that ℱ does not exceed the threshold ℱ0 in all the Nc cells is [1 − PF(ℱ0)]^Nc. The probability P^T_F that ℱ exceeds ℱ0 in one or more cells is thus given by

\[
P^T_F(\mathcal{F}_0) = 1 - \left[ 1 - P_F(\mathcal{F}_0) \right]^{N_c}. \qquad (61)
\]
This by definition is the false alarm probability when the phase parameters are unknown. The number of false alarms NF is given by
\[
N_F = N_c \, P^T_F(\mathcal{F}_0). \qquad (62)
\]
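Equations (61)–(62) combine the per-cell false alarm probability with the number of cells. For very small PF and very large Nc the direct expression 1 − [1 − PF]^Nc loses precision, so the sketch below evaluates it via log1p/expm1 (the threshold and cell count are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def total_false_alarm_probability(F0, N_c, n=4):
    """P_F^T(F0) of Eq. (61), evaluated as -expm1(N_c * log1p(-P_F)) to keep
    precision when P_F is tiny and N_c is huge."""
    P_F = chi2.sf(2 * F0, df=n)           # per-cell probability, Eq. (48)
    return -np.expm1(N_c * np.log1p(-P_F))

F0, N_c = 30.0, 1.0e9                     # illustrative threshold and cell count
P_FT = total_false_alarm_probability(F0, N_c)
N_F = N_c * P_FT                          # expected number of false alarms, Eq. (62)
print(P_FT, N_F)
```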
A different approach to the calculation of the number of false alarms, using the Euler characteristic of level crossings of a random field, is described in [49].

It was shown (see [29]) that for any finite ℱ0 and Nc, Equation (61) provides an upper bound for the false alarm probability. Also in [29] a tighter upper bound for the false alarm probability was derived by modifying a formula obtained by Mohanty [68]. The formula amounts essentially to introducing a suitable coefficient multiplying the number of cells Nc.

4.5.3 Detection probability

When the signal is present, a precise calculation of the pdf of ℱ is very difficult because the presence of the signal makes the data x(t) a non-stationary random process. As a first approximation we can estimate the probability of detection of the signal when the parameters are unknown by the probability of detection when the parameters of the signal are known [given by Equation (49)]. This approximation assumes that when the signal is present the true values of the phase parameters fall within the cell where ℱ has a maximum. The higher the signal-to-noise ratio ρ, the better this approximation.
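Within this approximation one can, for instance, fix the threshold ℱ0 from a target total false alarm probability by inverting Equation (61) and then read off the detection probability from Equation (49); the numbers below (cell count, target probability, signal-to-noise ratio) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2, ncx2

n = 4                        # degrees of freedom of 2F
N_c = 1.0e9                  # illustrative number of elementary cells
P_FT_target = 0.01           # illustrative target total false alarm probability
rho = 9.0                    # illustrative optimal signal-to-noise ratio

# Invert Eq. (61) for the per-cell false alarm probability, then Eq. (48) for F0.
P_F = -np.expm1(np.log1p(-P_FT_target) / N_c)
F0 = chi2.isf(P_F, df=n) / 2.0

# Approximate detection probability, Eq. (49), with parameters assumed known.
P_D = ncx2.sf(2 * F0, df=n, nc=rho ** 2)
print(F0, P_D)
```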

