The two main codes for performing N-body simulations are Kira and NBODYx. The Kira integrator is part of the Starlab environment, which also includes stellar evolution codes and other modules for N-body simulations. The NBODYx codes have been developed and improved by Aarseth since the early 1960s. He has written two excellent summaries of the general properties and development of the NBODYx codes [1, 2]. For further details, see Aarseth's book on N-body simulations. A good summary of general N-body applications can also be found at the NEMO website. NBODY6++ is a parallelization of the NBODY6 code for use on large computer clusters, and a parallel version of Kira is under development. Most large N-body calculations are done with a special-purpose computer called the GRAPE (GRAvity PipE) invented by Makino. The most recent incarnation of the GRAPE is the GRAPE-6, which has a theoretical peak speed of 100 Tflops. There is also a PCI card version (GRAPE-6A) designed for use in PC clusters. The GRAPE calculates the accelerations and jerks (time derivatives of the accelerations) arising from the gravitational interaction between each pair of stars in the cluster. The next-generation GRAPE-DR, which could reach 1 Pflops, should be operational in about three years.
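To make concrete what the GRAPE pipeline evaluates in hardware, here is a minimal direct-summation sketch of the acceleration and jerk on each star, written in standard N-body units (G = 1). The function name and the softening parameter `eps` (used to tame divergent close encounters) are illustrative assumptions, not taken from any of the codes named above:

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps=1e-4):
    """Direct-summation accelerations and jerks -- the per-pair
    quantities a GRAPE pipeline computes -- with G = 1.
    pos, vel: (N, 3) arrays; mass: (N,); eps: softening length."""
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]          # separation vector
            dv = vel[j] - vel[i]          # relative velocity
            r2 = dr @ dr + eps**2
            r3 = r2 * np.sqrt(r2)
            acc[i] += mass[j] * dr / r3
            # time derivative of the acceleration term above
            jerk[i] += mass[j] * (dv / r3 - 3.0 * (dr @ dv) * dr / (r2 * r3))
    return acc, jerk
```

The jerk is what enables the fourth-order Hermite schemes used with GRAPE hardware to take comparatively long, individually adaptive time steps.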
The main advantage of N-body simulations is the small number of simplifying assumptions that must be made concerning the dynamical interactions within the cluster. The specific stars and trajectories involved in any interaction during the simulation are known, so the details of those specific interactions can be calculated as the simulation proceeds. Within the limits of the numerical errors that accumulate during the calculation, one can have great confidence in the results of N-body simulations.
Obviously, one of the main computational difficulties is simply the CPU cost necessary to integrate the equations of motion for N bodies. Because the force on each body is a sum over all the others, this scales roughly as N^2 per time step. The other computational difficulty of the direct N-body method is the wide range of precision required [115, 98]. Consider the range of distances, from the size of a neutron star (~10 km) to the size of a globular cluster (~10 pc, or ~10^14 km), spanning 14 orders of magnitude. If the intent of the calculations is to determine the frequency of interactions with neutron stars, we have to know the relative position of every star to within 1 part in 10^14. The range of time scales is worse yet. Considering that the time for a close passage of two neutron stars is on the order of milliseconds and that the age of a globular cluster is ~10^10 yr, we find that the time scales span 20 orders of magnitude. These computational requirements coupled with hardware limitations mean that the number of bodies that can be included in a reasonable simulation is no more than ~10^5. This is about an order of magnitude less than the number of stars in a typical globular cluster.
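The N^2 growth of the per-step cost can be seen by simply counting the distinct pairwise force evaluations a direct-summation step requires — a back-of-the-envelope illustration, not drawn from any particular code:

```python
def pairwise_force_evaluations(n):
    """Distinct pairs a direct-summation step must evaluate: n(n-1)/2,
    i.e. O(n^2)."""
    return n * (n - 1) // 2

# Increasing N tenfold multiplies the per-step cost by roughly 100.
for n in (1_000, 10_000, 100_000):
    print(f"N = {n:>7,}: {pairwise_force_evaluations(n):>13,} pair forces")
```

At N ~ 10^5 this is already ~5 x 10^9 force evaluations per step, which is why special-purpose hardware like the GRAPE is needed for large runs.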
Although one has great confidence in the results of an N-body simulation, these simulations are generally for systems that are smaller than globular clusters. Consequently, applications of N-body simulations to globular cluster dynamics involve scaling lower-N simulations up to the globular cluster regime. Although many processes scale with N, they do so in different ways. Thus, one scales the results of an N-body simulation based upon the assumption of a dominant process. However, one can never be certain that the extrapolation is smooth and that there are no critical points in the scaling with N. One can also scale other quantities in the model so that the quantity of interest is correctly scaled. An understanding of the nature of the scaling is crucial to understanding the applicability of N-body simulations to globular cluster dynamics (see Baumgardt for an example). The scaling problem is one of the fundamental shortcomings of the N-body approach.
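As an illustration of how such a rescaling works, assume two-body relaxation is the dominant process; the relaxation time then scales as t_rlx ∝ N / ln(γN) in units of the crossing time (the standard Spitzer form, with γ ≈ 0.11 conventionally used for equal-mass systems). The sketch below converts a time measured in a small-N run to the corresponding time in a full-sized cluster; the function name, the choice of γ, and the example N values are illustrative assumptions:

```python
import math

def relaxation_scale_factor(n_target, n_sim, gamma=0.11):
    """Ratio of relaxation times (in crossing-time units) between a
    target cluster of n_target stars and a simulation of n_sim stars,
    using t_rlx proportional to N / ln(gamma * N).  Illustrative only:
    other processes (binary burning, evaporation, ...) scale with N
    differently, so this factor applies to relaxation-driven evolution."""
    t_rlx = lambda n: n / math.log(gamma * n)
    return t_rlx(n_target) / t_rlx(n_sim)

# Extrapolating a 16,384-body run to a 10^6-star cluster stretches
# relaxation-driven time scales by a factor of a few tens.
factor = relaxation_scale_factor(1_000_000, 16_384)
```

Because different processes carry different N-dependences, a factor like this is valid only for the process assumed to be dominant — which is precisely the caveat raised above.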
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 Germany License.