It is usual to characterize the intensity of a random field of gravitational waves by its energy density as a function of frequency. Since the energy density of a plane wave is the same as its flux (when $c = 1$), we have from Equation (17)

$$\varepsilon_{gw} = \frac{\pi}{4}\frac{c^2}{G}\, f^2 h^2 .$$

But the wave field in this case is a random variable, so we must replace $h^2$ by a statistical mean-square amplitude per unit frequency (Fourier-transform power per unit frequency) called $S_h(f)$, so that the energy density per unit frequency is proportional to $f^2 S_h(f)$. It is then conventional to talk about the energy density per unit logarithm of the frequency, which means multiplying by $f$. The result, after being careful about averaging over all directions of the waves and all independent polarization components, is [30, 362]

$$\frac{d\varepsilon_{gw}}{d\ln f} = \frac{\pi}{4}\frac{c^2}{G}\, f^3 S_h(f) .$$

Finally, what is of most interest is the energy density as a fraction of the closure or critical cosmological density, given by the Hubble constant $H_0$ as $\rho_c = 3H_0^2/(8\pi G)$. The resulting ratio is called $\Omega_{gw}(f)$:

$$\Omega_{gw}(f) = \frac{2\pi^2}{3H_0^2}\, f^3 S_h(f) .$$

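Written out numerically, the conventional relation $\Omega_{gw}(f) = (2\pi^2/3H_0^2)\, f^3 S_h(f)$ is easy to evaluate. The following is a minimal sketch in Python, assuming a Hubble constant of 70 km/s/Mpc (an illustrative value, not one fixed by the text) and $S_h(f)$ taken as the one-sided strain power spectral density; the function name `omega_gw` is ours:

```python
import numpy as np

# Assumed value: H0 ~ 70 km/s/Mpc, converted to s^-1 (1 Mpc ~ 3.086e22 m).
H0 = 70e3 / 3.086e22

def omega_gw(f, S_h, H0=H0):
    """Omega_gw(f) = (2 pi^2 / 3 H0^2) f^3 S_h(f), for f in Hz and S_h(f)
    the one-sided strain power spectral density in Hz^-1."""
    return 2.0 * np.pi**2 / (3.0 * H0**2) * f**3 * S_h

# Example: a strain spectral density of 1e-47 Hz^-1 at 100 Hz corresponds
# to Omega_gw of order 1e-5.
print(omega_gw(100.0, 1e-47))
```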
The only tight constraint on from non–gravitational-wave astronomy is that it must be smaller
than $10^{-5}$, in order not to disturb the agreement between the standard Big Bang model of nucleosynthesis
(of helium and other light elements) and observation. If the universe contains this much gravitational
radiation today, then at the time of nucleosynthesis the (blue-shifted) energy density of this radiation would
have been comparable to that of the photons and the three neutrino species. Although the
radiation would not have participated in the nuclear reactions, its extra energy density would have
required that the expansion rate of the universe at that time be significantly faster, in order to
evolve into the universe we see today. In turn, this faster expansion would have provided less
time for the nuclear reactions to “freeze out”, altering the abundances from the values that
are observed today [282, 349]. First-generation interferometers should be able to set direct
limits on the cosmological background at around this level. Radiation in the lower-frequency
LISA band, from galactic and extra-galactic binaries, is expected to be much smaller than this
bound.
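To see what this bound implies for detectors, one can invert the relation $\Omega_{gw}(f) = (2\pi^2/3H_0^2)\, f^3 S_h(f)$ for the strain spectral density. A rough Python sketch, again assuming $H_0 = 70$ km/s/Mpc (the helper name is ours):

```python
import numpy as np

H0 = 70e3 / 3.086e22   # assumed H0 ~ 70 km/s/Mpc, in s^-1

def strain_psd_from_omega(f, omega, H0=H0):
    """Invert Omega_gw = (2 pi^2 / 3 H0^2) f^3 S_h for S_h(f), in Hz^-1."""
    return 3.0 * H0**2 * omega / (2.0 * np.pi**2 * f**3)

# Strain amplitude spectral density corresponding to Omega_gw ~ 1e-5
# at 100 Hz, in the ground-based interferometer band:
amp = np.sqrt(strain_psd_from_omega(100.0, 1e-5))
print(amp)   # of order 3e-24 Hz^-1/2
```

A strain amplitude of a few times $10^{-24}\,\mathrm{Hz}^{-1/2}$ at 100 Hz is indeed around the level that first-generation instruments can reach by cross-correlation.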

Random radiation is indistinguishable from instrumental noise in a single detector, at least for short observing times. If the random field is produced by an anisotropically distributed set of astrophysical sources (the binaries in our galaxy, for example), then over a year, as the detector changes its orientation, the noise from this background should rise and fall in a systematic way, allowing it to be identified. But this is a rather crude way of detecting the radiation, and a better way is to perform a cross-correlation between two detectors, if available.

In cross-correlation, which amounts to multiplying the outputs and integrating, the random signal in one detector essentially acts as a template for the signal in the other detector. If they match, then there will be a stronger-than-expected correlation. Notice that they can only match well if the wavelength of the gravitational waves is longer than the separation between the detectors: otherwise time delays for waves reaching one detector before the other degrade the match. The outcome is not like standard matched filtering, however, since the “filter” of the first detector has as much noise superimposed on its template as the other detector. As a result, the amplitude SNR of the correlated field grows with observing time only as $T^{1/4}$, rather than the square-root growth $T^{1/2}$ that characterizes matched filtering [362].
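This growth law can be seen in a toy simulation: two white-noise “detector” streams sharing a weak common random signal are multiplied and summed, and the SNR of that statistic is measured over many trials. This is only an illustrative sketch (white noise, no detector response functions), with all names and parameters our own:

```python
import numpy as np

def correlation_snr(n_samples, sigma_s=0.3, sigma_n=1.0, trials=400, seed=1):
    """Empirical SNR of the cross-correlation statistic Y = sum(x1 * x2),
    where each 'detector' output is a common random signal s plus its own
    independent noise: x_i = s + n_i (toy white-noise model)."""
    rng = np.random.default_rng(seed)
    ys = np.empty(trials)
    for k in range(trials):
        s = rng.normal(0.0, sigma_s, n_samples)        # common stochastic signal
        x1 = s + rng.normal(0.0, sigma_n, n_samples)   # detector 1
        x2 = s + rng.normal(0.0, sigma_n, n_samples)   # detector 2
        ys[k] = np.sum(x1 * x2)
    return ys.mean() / ys.std()

snr_short = correlation_snr(2_500)
snr_long = correlation_snr(40_000)       # 16x more data
print(snr_long / snr_short)              # ~ 4: statistic SNR grows as sqrt(T)
print(np.sqrt(snr_long / snr_short))     # ~ 2 = 16**(1/4)
```

Since the statistic $Y$ measures the signal *power* $h^2$, its $\sqrt{T}$ growth translates into only $T^{1/4}$ growth in the equivalent amplitude SNR.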

http://www.livingreviews.org/lrr-2009-2