I've said [link|http://use.perl.org/comments.pl?sid=20059&cid=30733|as much] elsewhere. (Though I admit I haven't personally seen a system with a load over 100 that wasn't melting down. But I've seen an Oracle server where 40 was a light load.)
I would think, though, that with load that high you have other architectural problems. First of all, on most systems (Linux 2.6 being a notable exception), the scheduler itself starts to consume considerable CPU. Secondly, responsiveness takes a beating as processes wind up waiting a few hundred time slices before their turn comes up in the scheduler. Since the scheduler doesn't normally show up in CPU usage figures, this wouldn't be reported. (In fact, at some point reported CPU will drop and keep on dropping, as the actual work done bumps up against total CPU minus that unreported kernel activity.) Responsiveness of high-priority jobs (i.e. higher than the bulk of those spinning processes) might be unimpeded, but that contention has to show up somewhere.
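To put a rough number on that scheduling delay, here's a quick Python sketch. It's my own illustration, not anything from the original thread: the spin/probe functions and the process counts are made up, and it assumes a Unix box where os.getloadavg() works. It floods the machine with busy-loop processes and then measures how late a simple timed sleep actually wakes up:

    import os
    import time
    import multiprocessing

    def spin():
        # CPU-bound busy loop: one more always-runnable process
        # sitting on the scheduler's run queue.
        while True:
            pass

    def worst_wakeup_delay(samples=50, interval=0.01):
        # Request a 10ms sleep and record how much later than asked
        # we actually get the CPU back -- a rough proxy for how long
        # we waited for our turn in the scheduler.
        worst = 0.0
        for _ in range(samples):
            start = time.monotonic()
            time.sleep(interval)
            worst = max(worst, time.monotonic() - start - interval)
        return worst

    if __name__ == "__main__":
        print("idle worst-case delay: %.4fs" % worst_wakeup_delay())

        # Several times more busy processes than CPUs drives the load up.
        spinners = [multiprocessing.Process(target=spin, daemon=True)
                    for _ in range((os.cpu_count() or 1) * 8)]
        for p in spinners:
            p.start()
        time.sleep(2)  # let the run queue fill

        # The 1-minute load average lags behind, but the run queue
        # is already long by this point.
        print("1-minute load average: %.2f" % os.getloadavg()[0])
        print("loaded worst-case delay: %.4fs" % worst_wakeup_delay())

On a loaded box the worst-case delay should come out well above the requested interval, and renicing the probe above the spinners should pull it back down, which is the high-priority point above.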
IF the system and job are tasked correctly, none of this may matter; the system gets the job done just fine. But if you could change the software architecture so that it doesn't put so much pressure on the scheduler, you'd probably get even more out of the same equipment.
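For one concrete (and simplified) example of what I mean by taking pressure off the scheduler, assuming the work is CPU-bound, here's a sketch using Python's standard multiprocessing module, where handle() is a hypothetical stand-in for the real work: rather than forking a process per task, keep a fixed pool roughly sized to the hardware and queue everything else.

    import os
    from multiprocessing import Pool

    def handle(task):
        # Hypothetical stand-in for whatever CPU-bound work one task needs.
        return task * task

    if __name__ == "__main__":
        tasks = range(10000)
        # A fixed pool about the size of the hardware keeps the run queue
        # short; excess work waits in a queue instead of as thousands of
        # runnable processes all fighting for time slices.
        with Pool(processes=os.cpu_count()) as pool:
            results = pool.map(handle, tasks)
        print(len(results), "tasks done")

Same total work, but the scheduler only ever sees about one runnable process per CPU.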
Cheers,
Ben