24 bits are probably* enough to display any image as well as it can be done on a normal desktop computer. But if your goal is really high-quality images, you want more bits while processing the image. Generally, 24-bit images consist of 8-bit values for red, green, and blue. That means only 256 levels per component, so rounding error can accumulate when multiple calculations are applied to an image before it is displayed.
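To make that accumulation concrete, here's a rough C sketch (the 0.9 scale factor and 20 iterations are just arbitrary values for illustration). Darkening an 8-bit value repeatedly and then brightening it back loses information at every step, while the same math done in floating point and rounded once at the end does not:

    #include <stdio.h>
    #include <stdint.h>

    /* Darken a pixel value 20 times, then brighten it back 20 times.
     * The 8-bit pipeline rounds to an integer after every step; the
     * float pipeline rounds only once, when the result is displayed. */
    int main(void) {
        uint8_t v8 = 200;   /* 8 bits per channel: quantize every step */
        double  vf = 200.0; /* higher precision: quantize at the end   */

        for (int i = 0; i < 20; i++) { v8 = (uint8_t)(v8 * 0.9); vf *= 0.9; }
        for (int i = 0; i < 20; i++) { v8 = (uint8_t)(v8 / 0.9); vf /= 0.9; }

        printf("8-bit pipeline: %u\n", v8);                  /* ~142: badly drifted  */
        printf("float pipeline: %u\n", (uint8_t)(vf + 0.5)); /* 200: value preserved */
        return 0;
    }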
The real reason the move to 64-bit is largely not relevant to graphics is that the high-performance internal calculations that can benefit from more bits often already get them. All of the current x86 processors have extensions that let some calculations be done with more than 32 bits. There are several variations; one example, the Streaming SIMD Extensions 2 (SSE2) found on Pentium 4 processors, can operate on 128-bit registers.
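For a small (if contrived) picture of what those extensions do, here's a minimal C sketch using SSE2 intrinsics, not any particular graphics program's actual code; the 1.2 brightness factor is just an arbitrary example value:

    #include <stdio.h>
    #include <emmintrin.h> /* SSE2 intrinsics */

    /* One SSE2 instruction multiplies two double-precision color
     * components at once, using a single 128-bit register. */
    int main(void) {
        double chan[2] = { 0.25, 0.75 };      /* two color components */
        double out[2];

        __m128d v = _mm_loadu_pd(chan);       /* 2 doubles in one 128-bit register */
        __m128d k = _mm_set1_pd(1.2);         /* broadcast the scale factor */
        _mm_storeu_pd(out, _mm_mul_pd(v, k)); /* both multiplies in one instruction */

        printf("%f %f\n", out[0], out[1]);
        return 0;
    }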
Jay
*I say probably because there are complications that make it hard to be sure. Different people have different degrees of sensitivity to different parts of the spectrum, and distortion caused by light sources other than the monitor, differences in the exact spectra produced by different monitors, and other messy factors make a precise calculation hard.