
4.8 Algorithms to calculate the ℱ-statistic

4.8.1 The two-step procedure

In order to detect signals we search for threshold crossings of the ℱ-statistic over the intrinsic parameter space. Once we have a threshold crossing we need to find the precise location of the maximum of ℱ in order to estimate accurately the parameters of the signal. A satisfactory procedure is the two-step procedure. The first step is a coarse search where we evaluate ℱ on a coarse grid in parameter space and locate threshold crossings. The second step, called fine search, is a refinement around the region of parameter space where the maximum identified by the coarse search is located.

There are two methods to perform the fine search. One is to refine the grid around the threshold crossing found by the coarse search [70, 68, 91, 88], and the other is to use an optimization routine to find the maximum of ℱ [49, 60]. As the initial value for the optimization routine we input the values of the parameters found by the coarse search. Many maximization algorithms are available. One useful method is the Nelder–Mead algorithm [61], which does not require computation of the derivatives of the function being maximized.
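The two-step procedure can be sketched as follows. This is a minimal illustration on a toy one-parameter statistic (the sinc²-shaped ambiguity surface of a monochromatic signal of known duration), not the actual ℱ-statistic; the function names and numbers are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the F-statistic: a smooth peak in frequency,
# maximized at the (hypothetical) true frequency f0_true.
f0_true = 100.0
T = 1.0  # observation time

def F_stat(f):
    # sinc^2-shaped ambiguity surface around the true frequency
    return np.sinc((f - f0_true) * T) ** 2

# Step 1: coarse search on a grid with spacing of order 1/T.
coarse_grid = np.arange(90.0, 110.0, 0.5 / T)
f_coarse = coarse_grid[np.argmax(F_stat(coarse_grid))]

# Step 2: fine search with the derivative-free Nelder-Mead algorithm,
# started from the coarse-grid maximum (minimize the negative statistic).
res = minimize(lambda p: -F_stat(p[0]), x0=[f_coarse], method="Nelder-Mead")
f_fine = res.x[0]
```

The coarse grid only needs to be dense enough that the true maximum falls into the basin of attraction of the grid point passed to the optimizer.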

4.8.2 Evaluation of the ℱ-statistic

Usually the grid in parameter space is very large and it is important to calculate the optimum statistic as efficiently as possible. In special cases the ℱ-statistic given by Equation (35) can be further simplified. For example, in the case of coalescing binaries ℱ can be expressed in terms of convolutions that depend on the difference between the time-of-arrival (TOA) of the signal and the TOA parameter of the filter. Such convolutions can be efficiently computed using Fast Fourier Transforms (FFTs). For continuous sources, like gravitational waves from rotating neutron stars observed by ground-based detectors [49] or gravitational waves from stellar-mass binaries observed by space-borne detectors [60], the detection statistic ℱ involves integrals of the general form

$$\int_0^{T_0} x(t)\, m(t;\omega,\tilde\xi^\mu)\, \exp\!\big[i\omega\,\phi_{\rm mod}(t;\tilde\xi^\mu)\big]\, \exp(i\omega t)\, dt, \qquad (80)$$
where $\tilde\xi^\mu$ are the intrinsic parameters excluding the frequency parameter ω, m is the amplitude modulation function, and ωφ_mod the phase modulation function. The amplitude modulation function is slowly varying compared to the exponential terms in the integral (80). We see that the integral (80) can be interpreted as a Fourier transform (and computed efficiently with an FFT) if φ_mod = 0 and if m does not depend on the frequency ω. In the long-wavelength approximation the amplitude function m does not depend on the frequency. In this case Equation (80) can be converted to a Fourier transform by introducing a new time variable t_b [87],

$$t_b := t + \phi_{\rm mod}(t;\tilde\xi^\mu). \qquad (81)$$
Thus in order to compute the integral (80), for each set of the intrinsic parameters $\tilde\xi^\mu$ we multiply the data by the amplitude modulation function m, resample according to Equation (81), and perform the FFT. In the case of LISA detector data, when the amplitude modulation m depends on frequency, we can divide the data into several band-passed data sets, choosing the bandwidth for each set sufficiently small so that the change of m exp(iωφ_mod) is small over the band. In the integral (80) we can then use, as the value of the frequency in the amplitude and phase modulation functions, the maximum frequency of the band of the signal (see [60] for details).
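The resampling technique can be sketched as follows. This is a toy illustration under simple assumptions: a simulated signal with a hypothetical phase-modulation function φ_mod and a slowly varying amplitude modulation m; the data are multiplied by m, interpolated onto a uniform grid in the new time variable t_b of Equation (81), and a single FFT then evaluates the integral (80) for all frequencies at once:

```python
import numpy as np

T0, N = 1.0, 4096
t = np.arange(N) * T0 / N

def phi_mod(t):   # hypothetical phase-modulation function
    return 1e-3 * np.sin(2 * np.pi * t / T0)

def m(t):         # hypothetical slowly varying amplitude modulation
    return 1.0 + 0.1 * np.cos(2 * np.pi * t / T0)

f_sig = 200.0     # signal frequency in the time variable t_b
x = m(t) * np.cos(2 * np.pi * f_sig * (t + phi_mod(t)))  # simulated data

y = x * m(t)                           # multiply data by amplitude modulation
tb = t + phi_mod(t)                    # new time variable, Equation (81)
tb_uniform = np.linspace(tb[0], tb[-1], N)
y_res = np.interp(tb_uniform, tb, y)   # resample onto a uniform grid in t_b
Y = np.fft.rfft(y_res)                 # one FFT gives the integral for all omega
freqs = np.fft.rfftfreq(N, d=tb_uniform[1] - tb_uniform[0])
f_peak = freqs[1:][np.argmax(np.abs(Y[1:]))]  # skip the DC bin
```

In the resampled time the signal is an almost pure sinusoid, so the spectrum peaks at the signal frequency; without the resampling step the phase modulation would smear the signal power over many frequency bins.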

4.8.3 Comparison with the Cramér–Rao bound

In order to test the performance of the maximization method of the ℱ-statistic it is useful to perform Monte Carlo simulations of the parameter estimation and compare the variances of the estimators with the variances calculated from the Fisher matrix. Such simulations were performed for various gravitational-wave signals [55, 19, 49]. In these simulations we observe that above a certain signal-to-noise ratio, which we call the threshold signal-to-noise ratio, the results of the Monte Carlo simulations agree very well with the calculations of the rms errors from the inverse of the Fisher matrix. However, below the threshold signal-to-noise ratio they differ by a large factor. This threshold effect is well known in signal processing [96]. There exist more refined theoretical bounds on the rms errors that explain this effect, and they were also studied in the context of the gravitational-wave signal from a coalescing binary [72]. Use of the Fisher matrix in the assessment of the parameter estimators has been critically examined in [95], where a criterion has been established for the signal-to-noise ratio above which the inverse of the Fisher matrix approximates well the covariance of the estimators of the parameters.
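A minimal Monte Carlo experiment of this kind can be sketched as follows. It estimates the phase of a monochromatic signal in white Gaussian noise and compares the variance of the estimator with the Cramér–Rao bound 1/ρ²; the signal model and numbers are illustrative, not taken from the cited simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
theta = 2 * np.pi * 50.0 * np.arange(N) / N   # known frequency, 50 cycles
A, sigma, phi0 = 1.0, 2.0, 0.3                # amplitude, noise rms, true phase
rho2 = A**2 * N / (2 * sigma**2)              # SNR^2 = Fisher information for phi

n_trials = 2000
phi_hat = np.empty(n_trials)
for i in range(n_trials):
    x = A * np.cos(theta + phi0) + sigma * rng.standard_normal(N)
    # quadrature correlations give the maximum-likelihood phase estimate
    C = np.sum(x * np.cos(theta))
    S = np.sum(x * np.sin(theta))
    phi_hat[i] = np.arctan2(-S, C)

mc_var = np.var(phi_hat)   # Monte Carlo variance of the estimator
crlb = 1.0 / rho2          # Cramér-Rao lower bound
```

At the high signal-to-noise ratio chosen here (ρ² = 128) the two variances agree closely; lowering the amplitude A pushes the simulation below the threshold signal-to-noise ratio, and the Monte Carlo variance then exceeds the Cramér–Rao bound by a large factor.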

Here we present a simple model that explains the deviations from the covariance matrix and reproduces well the results of the Monte Carlo simulations. The model makes use of the concept of the elementary cell of the parameter space that we introduced in Section 4.5.2. The calculation given below is a generalization of the calculation of the rms error for the case of a monochromatic signal given by Rife and Boorstyn [81].

When the values of parameters of the template that correspond to the maximum of the functional ℱ fall within the cell in the parameter space where the signal is present, the rms error is satisfactorily approximated by the inverse of the Fisher matrix. However, sometimes as a result of noise the global maximum is in the cell where there is no signal. We then say that an outlier has occurred. In the simplest case we can assume that the probability density of the values of the outliers is uniform over the search interval of a parameter, and then the rms error is given by

$$\sigma_{\rm out}^2 = \frac{\Delta^2}{12}, \qquad (82)$$
where Δ is the length of the search interval for a given parameter. The lower the signal-to-noise ratio, the higher the probability that an outlier occurs. Let q be the probability that an outlier occurs. Then the total variance σ² of the estimator of a parameter is the weighted sum of the two errors
$$\sigma^2 = \sigma_{\rm out}^2\, q + \sigma_{\rm CR}^2\, (1 - q), \qquad (83)$$
where σ_CR is the rms error calculated from the covariance matrix for a given parameter. One can show [49] that the probability q can be approximated by the following formula:
$$q = 1 - \int_0^\infty p_1(\rho,{\mathcal F}) \left( \int_0^{\mathcal F} p_0(y)\, dy \right)^{N_c - 1} d{\mathcal F}, \qquad (84)$$
where p_0 and p_1 are the probability density functions of false alarm and detection given by Equations (46) and (47), respectively, and where N_c is the number of cells in the parameter space. Equation (84) is in good but not perfect agreement with the rms errors obtained from the Monte Carlo simulations (see [49]). There are clearly also other reasons for deviations from the Cramér–Rao bound. One important effect (see [72]) is that the functional ℱ has many local subsidiary maxima close to the global one. Thus for a low signal-to-noise ratio the noise may promote a subsidiary maximum to a global one.
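The outlier probability q and the total variance of Equation (83) can be evaluated numerically. The sketch below assumes that 2ℱ follows a χ² distribution with four degrees of freedom, central in the absence of a signal and with noncentrality parameter ρ² when the signal is present; this distributional assumption and all numerical values are illustrative and should be checked against Equations (46) and (47):

```python
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.integrate import quad

rho = 6.0        # signal-to-noise ratio (illustrative)
Nc = 1.0e5       # number of cells in the parameter space (assumed)
Delta = 1.0      # length of the search interval of the parameter
sigma_CR = 0.01  # rms error from the Fisher matrix (assumed)

def integrand(F):
    # p1(rho, F) * [ integral of p0 up to F ]^(Nc - 1),
    # assuming 2F is chi^2 with 4 dof (noncentral under the signal)
    p1 = 2 * ncx2.pdf(2 * F, df=4, nc=rho**2)
    P0 = chi2.cdf(2 * F, df=4)
    return p1 * P0 ** (Nc - 1)

q = 1.0 - quad(integrand, 0, 100, limit=200)[0]     # Eq. (84)
sigma2_out = Delta**2 / 12                          # Eq. (82)
sigma2 = q * sigma2_out + (1 - q) * sigma_CR**2     # Eq. (83)
```

The total rms error interpolates between the Fisher-matrix prediction (q → 0, high signal-to-noise ratio) and the uniform-outlier error Δ/√12 (q → 1, low signal-to-noise ratio), reproducing the threshold behaviour described above.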