Computers are fast enough that for most things, micro-optimization is simply not needed. In that case, what you optimize is programmer time and flexibility. And Alan's claim is that those are best served by having many small functions that you can put together in flexible ways.

While Alan was talking about Lisp versus Pascal, you can see the same principle at work in the Unix toolset: many small tools (cat, diff, sort, grep, ...) for dealing with text, all of which can be put together flexibly. What you can do with this is far more powerful than having many different kinds of data, each of which only works with its own specific functions.
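
As a quick illustration (the file name README is just a placeholder), here is the classic pipeline for printing the ten most common words in a file, built entirely out of small, general-purpose tools:

    tr -cs 'A-Za-z' '\n' < README | tr 'A-Z' 'a-z' |
        sort | uniq -c | sort -rn | head -10

None of those tools knows anything about word frequencies; the power comes from the shared datatype (lines of text) plus the pipe.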

That is the problem that [link|/forums/render/user?username=admin|admin] complains about with IDEs that keep all of the code in a disk image. Before long you want to do all of the things you are used to doing with the file-based Unix toolset, and you can't, because the datatype is wrong. So now you have to reinvent and relearn the toolset, which is a lot of work just to get back to where you already were.

Having many small tools that can be put together in unanticipated ways beats having to reinvent the wheel for each datatype. (Until, that is, the deficiencies of your datatype rear their ugly heads. For instance, how many shell scripts break on filenames that contain newlines?)
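
A minimal sketch of that failure mode (the *.txt glob is illustrative):

    # Breaks if a file name contains a newline (or any whitespace,
    # because of word splitting on the command substitution):
    for f in $(find . -name '*.txt'); do
        wc -l "$f"
    done

    # Safer: NUL-delimited names can carry any byte a filename legally can:
    find . -name '*.txt' -print0 | xargs -0 wc -l

The deficiency is that "lines of text" has no way to escape its own delimiter, so every tool in the chain has to agree on a workaround like the NUL byte.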

Cheers,
Ben