4.7 Network detection

Gravitational wave detectors are almost omni-directional. As discussed in Section 4.2.1, both interferometers and bars have good sensitivity over a large area of the sky. In this regard, gravitational wave antennas are unlike conventional astronomical telescopes operating in, e.g., the optical, radio, or infrared bands, which observe only a very small fraction of the sky at any given time. The good news is that gravitational wave interferometers will have good sky coverage, and therefore a small number (around six) is sufficient to survey the sky. The bad news, however, is that gravitational wave observations will not automatically provide the location of the source in the sky. It will either be necessary to observe the same source in several non-co-located detectors and triangulate its position from the differences in the arrival times of the signal at the different detectors, or to observe for a long time and use the location-dependent Doppler modulation caused by the motion of the detector relative to the source to infer the source’s position in the sky. The latter is the analogue of the well-known technique in radio astronomy of synthesizing a long-baseline observation to gain resolution, and is only possible for sources, such as rotating neutron stars or stochastic backgrounds, that last for a long enough duration.
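
To illustrate the triangulation idea, the following sketch (in Python, using approximate Earth-fixed coordinates for the two LIGO sites purely as an assumed example) computes the arrival-time delay of a plane wave between two sites; a measured delay of this kind confines the source to a ring on the sky centred on the baseline.

    import numpy as np

    C = 299792458.0  # speed of light in m/s

    def time_delay(det1_pos, det2_pos, source_dir):
        """Arrival-time delay (seconds) of a plane wave at det2 relative to det1,
        for a source in the unit direction source_dir; positions are Cartesian
        vectors (metres) in an Earth-fixed frame."""
        baseline = np.asarray(det2_pos, float) - np.asarray(det1_pos, float)
        return np.dot(baseline, np.asarray(source_dir, float)) / C

    # Approximate Earth-fixed coordinates of the LIGO Hanford and Livingston sites
    hanford = np.array([-2.1614e6, -3.8347e6, 4.6004e6])
    livingston = np.array([-0.0743e6, -5.4962e6, 3.2243e6])

    # A source along the Earth's rotation axis, purely as an example direction
    n_hat = np.array([0.0, 0.0, 1.0])
    print("delay = %.2f ms" % (1.0e3 * time_delay(hanford, livingston, n_hat)))
    # The largest possible delay is |baseline|/c (about 10 ms for these two sites);
    # a measured delay confines the source to a ring on the sky about the baseline.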

A network of detectors is, therefore, essential for source reconstruction. Network observation is not only powerful in locating a source in the sky; independent observation of the same source in several detectors also adds to the detection confidence, especially since the noise background in the first generation of interferometers is not well understood and is plagued by nonstationarity and non-Gaussianity.

4.7.1 Coherent vs coincidence analysis

The availability of a network of detectors offers two different methods by which the data can be combined. One can either bring the data sets together first, combine them in a suitable way, apply the appropriate filter to the network data and integrate the signal coherently (coherent detection [283, 89, 160, 43]), or first analyze the data from each detector separately by applying the relevant filters and then look for coincidences in the multi-dimensional space of intrinsic (masses of the component stars, their spins, ...) and extrinsic (arrival times, a constant phase, source location, ...) parameters (coincidence detection [206, 208, 160, 443, 552, 6, 7, 8]).

A recent comparison of coherent analysis vis-à-vis coincidence analysis, under the assumption that the background noise is Gaussian and stationary, concluded that coherent analysis is, as one might expect, far better than coincidence analysis [265]. The same authors also explore, to a limited extent, the effect of nonstationary noise and reach essentially the same conclusion.

At the outset, coherent analysis sounds like a good idea, since in a network of N_D similar detectors the visibility of a signal improves by a factor of √N_D over that of a single detector. One can take advantage of this enhancement in SNR either to lower the false alarm rate, by increasing the detection threshold while maintaining the same detection efficiency, or to improve the detection efficiency at a given false alarm rate.
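
The sketch below illustrates this trade-off under a deliberately simplified model in which the detection statistic in Gaussian noise is itself a zero-mean, unit-variance Gaussian; the numbers used (a single-detector SNR of 8, six detectors) are invented for the example.

    import math

    def false_alarm_prob(threshold):
        """Single-trial false-alarm probability for a zero-mean, unit-variance
        Gaussian detection statistic (a deliberately oversimplified noise model)."""
        return 0.5 * math.erfc(threshold / math.sqrt(2.0))

    rho_single = 8.0   # assumed SNR of a source in one detector
    n_det = 6          # assumed number of similar detectors in the network

    # In a coherent combination the signal adds in amplitude and the noise in
    # quadrature, so the network SNR is sqrt(n_det) times the single-detector SNR.
    rho_network = math.sqrt(n_det) * rho_single
    print("single-detector SNR %.1f  ->  network SNR %.1f" % (rho_single, rho_network))

    # The gain can be spent on a higher threshold at the same detection efficiency,
    # which drives the false-alarm probability down very steeply:
    for rho_star in (8.0, 10.0, 12.0):
        print("threshold %.0f : false-alarm probability %.1e"
              % (rho_star, false_alarm_prob(rho_star)))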

However, there are two reasons why current data-analysis pipelines prefer coincidence analysis over coherent analysis. Firstly, since the detector noise is neither Gaussian nor stationary, coincidence analysis can reduce the background rate far more effectively than one might otherwise expect. Secondly, coherent analysis is computationally far more expensive than coincidence analysis, and it is presently not practicable to employ it.

Coincidence analysis is indeed a very powerful method for vetoing out spurious events. One can associate with each event in a given detector an ellipsoid, whose location and orientation depend on where in the parameter space and when the event was found, and whose size can be fixed using the SNR [316]. In effect, one associates with each event a ‘sphere of influence’ in the multi-dimensional space of masses, spins, arrival times, etc., and demands that the spheres associated with events from different detectors overlap in order to claim a detection. Since random triggers from a network of detectors are unlikely to be consistent with one another in this way, the method serves as a very powerful veto.
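
The following is a minimal sketch of the idea behind such an ellipsoidal consistency test, not the actual implementation of [316]; the parameter-space metrics and trigger positions are invented, and the overlap test uses a simple grid search.

    import numpy as np

    def ellipsoids_overlap(c1, g1, c2, g2, n_grid=1001):
        """Test whether the ellipsoids (x - c_i)^T g_i (x - c_i) <= 1 intersect,
        using the criterion  max over 0 < lam < 1 of
        K(lam) = dc^T [g1^{-1}/(1 - lam) + g2^{-1}/lam]^{-1} dc  <= 1,
        evaluated here on a simple grid in lam (adequate for a sketch)."""
        dc = np.asarray(c2, float) - np.asarray(c1, float)
        inv1, inv2 = np.linalg.inv(g1), np.linalg.inv(g2)
        k_max = 0.0
        for lam in np.linspace(1e-6, 1.0 - 1e-6, n_grid):
            m = inv1 / (1.0 - lam) + inv2 / lam
            k_max = max(k_max, float(dc @ np.linalg.solve(m, dc)))
        return k_max <= 1.0

    # Invented triggers in a two-dimensional (chirp time, arrival time) space;
    # in practice the matrices g_i would come from the template-bank metric,
    # with a size set by the trigger's SNR.
    c1, g1 = np.array([0.0, 0.0]), np.diag([1.0 / 0.01**2, 1.0 / 0.005**2])
    c2, g2 = np.array([0.008, 0.002]), np.diag([1.0 / 0.01**2, 1.0 / 0.005**2])
    print("coincident:", ellipsoids_overlap(c1, g1, c2, g2))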

It is probably not possible to infer beforehand which method will be more effective in detecting a source, as this might depend on the nature of the detector noise, on how the detection statistic is constructed, and so on. An optimal approach might be a suitable combination of the two: for instance, a coherent follow-up of a coincidence analysis (as is currently done in searches for compact binaries within the LSC), or the application of coincidence criteria to candidate events from a coherent search.

Coherent addition of data improves the visibility of the signal, but ‘coherent subtraction’ of the data in a detector network should yield data products that are devoid of gravitational wave signals. This leads naturally to the null stream veto.

4.7.2 Null stream veto

Data from a network of detectors, when suitably shifted in time and combined linearly with coefficients that depend on the source location, will yield a time series that, in the ideal case, is entirely devoid of the gravitational wave signal. Such a combination is called a null stream. For instance, for a set of three misaligned detectors, each measuring a data stream x_k(t), k = 1, 2, 3, the combination x(t) = A_23(θ, φ) x_1(t + τ_1) + A_31(θ, φ) x_2(t + τ_2) + A_12(θ, φ) x_3(t + τ_3), where the coefficients A_ij are functions of the responses of antennas i and j, and the τ_k, k = 1, 2, 3, are time delays that depend on the source location and the locations of the antennas, is a null stream. If the x_k(t), k = 1, 2, 3, contain a gravitational wave signal from an astronomical source, then x(t) will not contain the signature of this source. In contrast, if x(t) and the x_k(t) both contain the signature of a gravitational wave event, then that is an indication that one of the detectors has a glitch.
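
A minimal sketch of this construction is given below, with coefficients of the Gürsel-Tinto form A_ij = F+_i F×_j - F×_i F+_j built from the antenna-pattern functions; the antenna-pattern values, sampling rate and noise level are invented for the example, and the time delays are applied by crude integer-sample shifts.

    import numpy as np

    def null_stream(x, fplus, fcross, delays, fs):
        """Form the null combination of three detector data streams x[k], given
        assumed antenna-pattern values F+_k, Fx_k for a trial sky position and
        the corresponding time delays (seconds). Delays are applied here by
        integer-sample shifts, purely for illustration."""
        a23 = fplus[1] * fcross[2] - fcross[1] * fplus[2]
        a31 = fplus[2] * fcross[0] - fcross[2] * fplus[0]
        a12 = fplus[0] * fcross[1] - fcross[0] * fplus[1]
        out = np.zeros_like(x[0])
        for coeff, k in zip((a23, a31, a12), range(3)):
            out += coeff * np.roll(x[k], int(round(delays[k] * fs)))
        return out

    # Toy check with invented antenna patterns and zero delays: inject the same
    # (h+, hx) signal into three noisy streams; the null stream retains only noise.
    fs = 1024.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    hp, hc = np.sin(2 * np.pi * 60 * t), 0.5 * np.cos(2 * np.pi * 60 * t)
    fplus, fcross = [0.3, -0.5, 0.8], [0.6, 0.2, -0.4]
    x = [fp * hp + fc * hc + 1.0e-3 * np.random.randn(len(t))
         for fp, fc in zip(fplus, fcross)]
    print("rms of a single stream :", np.std(x[0]))
    print("rms of the null stream :",
          np.std(null_stream(x, fplus, fcross, [0.0, 0.0, 0.0], fs)))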

The existence and usefulness of a null stream was first pointed out by Gürsel and Tinto [186]. Wen and Schutz [390] proposed implementing it in LSC data analysis as a veto, and this has been taken up now by several search groups.

4.7.3 Detection of stochastic signals by cross-correlation

Stochastic background sources and their detection are discussed in more detail in Section 8. Here we will briefly mention the problem in the context of detector networks. As mentioned in Section 3.6, the universe might be filled with stochastic gravitational waves that were either generated in the primeval universe or by a population of background sources. Although each point source in such a population might not be individually detectable, collectively they could produce a confusion background through the random superposition of their waves. Since the waves are random in nature, it is not possible to use the techniques described in Sections 4.7.1, 4.7.2 and 5.1 to detect a stochastic background. However, we might use the noisy stochastic signal in one of the detectors as a “matched filter” for the data in another detector [362, 163, 30, 93]. In other words, it should be possible to detect a stochastic background by cross-correlating the data from a pair of detectors; the contribution of the common gravitational-wave background to the cross-correlation grows in time more rapidly than that of the independent random noise in the two instruments, thereby facilitating the detection of the background.

If two instruments with identical spectral noise density S_h are cross-correlated over a bandwidth Δf for a total time T, the spectral noise density of the output is reduced by a factor of (TΔf)^{1/2}. Since the noise amplitude is proportional to the square root of S_h, the amplitude of a signal that can be detected by cross-correlation improves only with the fourth root of the observing time. This should be compared with the square-root improvement that matched filtering gives.
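
The toy calculation below illustrates this scaling under strong simplifying assumptions (white noise, co-located and aligned detectors, an invented signal amplitude): the cross-correlation SNR grows roughly as the square root of the number of samples, i.e., as (TΔf)^{1/2}, so the weakest detectable amplitude improves only as (TΔf)^{1/4}.

    import numpy as np

    rng = np.random.default_rng(0)

    def cross_correlation_snr(duration, fs=1024, h_rms=0.05):
        """Cross-correlation statistic for two streams sharing a common stochastic
        signal of rms amplitude h_rms in unit-variance white noise (co-located,
        aligned detectors and white noise are simplifying assumptions)."""
        n = int(duration * fs)
        h = h_rms * rng.standard_normal(n)      # common stochastic background
        x1 = h + rng.standard_normal(n)         # detector 1
        x2 = h + rng.standard_normal(n)         # detector 2
        y = np.sum(x1 * x2)                     # cross-correlation statistic
        return y / np.sqrt(n)                   # normalised by its noise-only std. dev.

    for T in (16, 256, 4096):
        print("T = %5d s  ->  cross-correlation SNR ~ %.1f" % (T, cross_correlation_snr(T)))
    # The SNR grows roughly as sqrt(T * bandwidth), so the weakest detectable
    # amplitude h_rms improves only as (T * bandwidth)**(-1/4).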

The cross-correlation technique works best when the two detectors are situated close to one another. When they are separated, only those waves whose wavelength is larger than or comparable to the distance between the two detectors, or which arrive from a direction perpendicular to the separation between the detectors, contribute coherently to the cross-correlation statistic. Since the instrumental noise builds up rapidly at lower frequencies, detectors that are farther apart are less useful for cross-correlation. However, very nearby detectors (as in the case of the two LIGO detectors within the same vacuum system at Hanford) suffer from common background noise from the control system and the environment, making it rather difficult to ascertain whether any excess correlated noise is due to a stochastic background of gravitational waves.
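
As a rough rule of thumb (assuming simply that coherence is lost once the wavelength drops below about twice the detector separation), one can estimate the frequency above which a given pair of sites stops contributing usefully to the cross-correlation:

    C = 299792458.0  # speed of light in m/s

    def coherence_frequency(separation_m):
        """Rough frequency above which waves arriving along the baseline no longer
        add coherently: wavelength of order twice the detector separation."""
        return C / (2.0 * separation_m)

    print("Hanford-Livingston (~3000 km): ~%.0f Hz" % coherence_frequency(3.0e6))
    print("Co-located detectors         : no comparable limit")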

