* See "Object Oriented Programming Oversold!" by B. Jacobs, [link|http://www.geocities.com/tablizer/oopbad.htm|http://www.geocities...blizer/oopbad.htm]
The longer context is as follows:
[link|http://www.oreilly.com/catalog/oraclep3/|Pontifications]
I have to confess that I started programming before object approaches made any kind of inroads into business application development. I think I'm still waiting for that to happen.
Over the years, I've seen no compelling evidence that any particular programming style has a monopoly on the fundamental things we care about - fidelity to requirements, performance efficiency, developer effectiveness, and system reliability. I have seen a lot of fads, bandwagons, hand-waving, and unsupported assumptions (OK, I'm probably not entirely innocent myself), and object-oriented programming seems to attract quite a lot of it. That isn't to say that OOP fails to help you solve problems; it's just that OOP is not the magic bullet that many would have you believe.
Take, for example, the principle of object-based decomposition, particularly as it tends to generate inheritance hierarchies. By accurately modeling objects as they exist in the real world, software artifacts should be easier to comprehend, faster to assemble, and more amenable to large-scale system development. Sounds fabulous, doesn't it? Well, there are a lot of different ways to decompose something drawn from the real world.* It is a rare taxonomy that can exist in a simple hierarchy. My library catalog hierarchy could have been decomposed according to, say, media (print versus audio tape versus digital format...). And although Oracle provides wonderful tools for type evolution, it may still be so painful to make sweeping changes in a type hierarchy that it will never happen. This isn't really the tool's fault; reality has a way of circumventing even the best-laid plans.
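To make the decomposition point concrete, here is a minimal Java sketch (the class names are illustrative, not taken from the book or from Oracle's type system): the same catalog can be carved up by subject or by media, but single inheritance forces a commitment to one axis, and items that straddle both axes have no natural home.
```java
// Hypothetical library-catalog sketch: one plausible decomposition, by subject.
abstract class CatalogItem { String title; }
class FictionItem extends CatalogItem { }
class NonFictionItem extends CatalogItem { }

// An equally valid decomposition, by media. Single inheritance means an item
// cannot live in both hierarchies at once:
abstract class CatalogItemByMedia { String title; }
class PrintItem extends CatalogItemByMedia { }
class AudioTapeItem extends CatalogItemByMedia { }
class DigitalItem extends CatalogItemByMedia { }

// A "fiction audio tape" now needs either a combinatorial explosion of
// subclasses (FictionAudioTapeItem, NonFictionPrintItem, ...) or a sweeping
// rework of the hierarchy, which in practice tends never to happen.
```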
Nor is it clear that co-locating the programming logic (methods) with the data (attributes) in an abstract datatype yields any measurable benefits. It looks reasonable and makes for some great sound bites, but how exactly will coupling data and behavior be better than keeping data structures (logical and physical table design) separate from processes (procedures, functions, packages)? Many development methods acknowledge that an organization's business data structures have a much slower rate of change than do the algorithms that manipulate them. It is a design truism (even for OOP) that the more volatile elements of a system should be kept separate from the more stable elements.
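To illustrate the two styles this paragraph contrasts (a hypothetical Java sketch, not code from the book or from PL/SQL): the first bundles a volatile business rule with the data it operates on; the second keeps a passive, stable data record and puts the rule in a separate module, so the rule can change without disturbing the data structure.
```java
import java.math.BigDecimal;

// Style 1: data and behavior bundled in one abstract datatype.
class Invoice {
    BigDecimal amount;
    BigDecimal applyDiscount() {            // volatile business rule lives with the data
        return amount.multiply(new BigDecimal("0.95"));
    }
}

// Style 2: a passive record plus a separate module of procedures.
record InvoiceRow(BigDecimal amount) { }    // stable data structure

final class InvoiceRules {                  // volatile logic kept apart
    static BigDecimal applyDiscount(InvoiceRow row) {
        return row.amount().multiply(new BigDecimal("0.95"));
    }
}
```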
There is considerable inconsistency on this last point. Rich and famous object evangelists, while emphasizing the value of bundling data with behaviors, simultaneously promote a "model-view-controller" approach that "separates business logic from data." Are these emperors wearing clothes or not?
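A rough sketch of that tension (hypothetical names, not a reference implementation): in a typical model-view-controller arrangement the model is little more than a data holder and the business logic lives in the controller, which is precisely the data/behavior separation that the bundling principle argues against.
```java
// Hypothetical MVC-style sketch: the model is essentially a data holder...
class AccountModel {
    double balance;
}

// ...while the controller carries the business logic, keeping behavior
// separated from the data rather than bundled with it.
class AccountController {
    void withdraw(AccountModel account, double amount) {
        if (amount <= account.balance) {
            account.balance -= amount;      // rule enforced outside the data class
        }
    }
}
```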
Many OOP proponents have argued for years that its greatest benefit is the reuse of software. It has been said so many times that it must be true! Unfortunately, few observers have hard evidence for this, in part because there is no consensus on what constitutes "reuse". Even object apologists began promoting higher-level "components" (whatever those may be) as a preferred unit of reuse precisely because objects proved very difficult to fit into situations beyond those for which they were designed. My sense is that OOP results in no more code reuse than well-designed subroutines.