I think you might've hit on it. I seem to recall that so-called "integer" fields in languages like COBOL were really BCD (binary-coded decimal) fields.

BCD math (especially when the processor does it) has the advantage that you can just add more bytes to support a larger value. This avoids integer overflow problems, or at least makes overflow behavior intuitive, particularly for non-programmers. Add support for a decimal point and you've also sidestepped the problem of using floats for currency.
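
To make the "just add more bytes" point concrete, here's a toy sketch in Python of unpacked BCD addition (one decimal digit per byte, least significant digit first; real hardware typically packs two digits per byte, but the principle is the same):

    def bcd_add(a, b):
        """Add two BCD numbers given as lists of digits, LSD first."""
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            d = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            carry, digit = divmod(d, 10)
            result.append(digit)
        if carry:               # need a wider value? just append another digit
            result.append(carry)
        return result

    # 999 + 1 = 1000: the sum simply grows by one digit,
    # with no power-of-two boundary to blow past.
    print(bcd_add([9, 9, 9], [1]))   # [0, 0, 0, 1]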

(I had a serious discussion many, many years ago about the error characteristics of radix-2 floating-point numbers and why you don't want to use them for currency. It turns out the 2008 revision of IEEE 754 added radix-10 (decimal) formats intended for exactly that kind of financial work. But microcomputers of the time all used radix-2, and therefore programs like Excel did, too.)
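
If anyone wants to see the difference firsthand, Python's decimal module implements that kind of radix-10 arithmetic (it's based on the General Decimal Arithmetic spec, which also fed into the IEEE 754-2008 decimal formats):

    from decimal import Decimal

    # Radix-2: 0.1 has no finite binary representation, so the cents drift.
    print(0.1 + 0.1 + 0.1)                                   # 0.30000000000000004

    # Radix-10: decimal fractions stay exact, which is what money needs.
    print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1"))  # 0.3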

Wade.