(Note: if I read correctly, this is referring to JDK 1.3, so it's old; garbage collection in the JVM has only improved since then.)
Incremental "Pauseless" Garbage Collector
The mark-compact collector does not eliminate all user-perceivable pauses. User-perceived garbage collection pauses occur when "old" objects (objects that have survived for a while, in machine terms) need to be collected, and these pauses are proportional to the amount of live object data. This means that pauses can grow arbitrarily large as more data is manipulated, a very undesirable property for server applications, animation, and other soft real-time applications.
The Java HotSpot VM provides an alternative old-space garbage collector to solve this problem. This collector is fully incremental, eliminating user-detectable garbage collection pauses. This incremental collector scales smoothly, providing relatively constant pause times even when extremely large object data sets are being manipulated. This provides excellent behavior for:
* Server applications, especially high-availability applications
* Applications that manipulate very large "live" object data sets
* Applications where all user-noticeable pauses are undesirable, such as games, animation, or other highly interactive applications
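On HotSpot VMs of that era, the incremental collector was opt-in rather than the default. Assuming a JDK 1.3-style command line, it was selected with the `-Xincgc` flag (a non-standard `-X` option, later deprecated and removed in modern JDKs):

```
# Run MyServer with the incremental (train) old-space collector enabled.
# -Xincgc is a non-standard flag; check `java -X` on your JVM before relying on it.
java -Xincgc MyServer
```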
The pauseless collector works by using an incremental old-space collection scheme referred to academically as the "train" algorithm. This algorithm breaks up old-space collection pauses into many tiny pauses (typically less than ten milliseconds) that can be spread out over time, so that the program virtually never appears to the user to pause. Since the train algorithm is not a hard real-time algorithm, it cannot guarantee an upper limit on pause times; in practice, however, much larger pauses are extremely rare, and when they do occur they are not caused directly by large data sets.
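One rough way to see pauses like these from inside a program is a timer probe: sleep for a fixed interval, allocate some garbage, and record how much longer than the interval each iteration actually took. The excess delay is a crude proxy for GC (and scheduler) pauses. This is a hypothetical sketch, not anything from the original text; the class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class PauseProbe {
    /**
     * Sleeps {@code samples} times for {@code intervalMillis} each,
     * allocating garbage along the way, and returns the worst observed
     * excess delay in milliseconds (a rough proxy for pause times).
     */
    static long maxExcessMillis(int samples, long intervalMillis) {
        List<byte[]> garbage = new ArrayList<>();
        long maxExcess = 0;
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            garbage.add(new byte[64 * 1024]);          // allocate to provoke collection
            if (garbage.size() > 256) garbage.clear(); // cap the live set
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            maxExcess = Math.max(maxExcess, elapsedMillis - intervalMillis);
        }
        return maxExcess;
    }

    public static void main(String[] args) {
        System.out.println("worst observed excess delay: "
                + maxExcessMillis(50, 5) + " ms");
    }
}
```

Under an incremental collector, a probe like this should report consistently small excess delays even as the heap grows, whereas a stop-the-world old-space collection shows up as occasional large spikes.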
The pauseless collector also has a highly desirable side benefit: improved memory locality. The algorithm works by attempting to relocate groups of tightly coupled objects into regions of adjacent memory, giving those objects excellent paging and cache locality. This can also benefit highly multithreaded applications that operate on distinct sets of object data.