... I would suggest looking for ways to improve performance under normalization before giving in to denormalization for speed.
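Before giving up on the normalized schema, it's worth checking the usual suspects first. A minimal sketch of what I mean, in Python with sqlite3 and an entirely made-up customers/orders schema: quite often an index on the join column fixes the slow normalized query on its own.

    import sqlite3

    # Made-up normalized schema: orders reference customers by id.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(id),
            total       REAL
        );
    """)

    # Step one before denormalizing: index the join/filter column. This
    # alone often removes the table scan that made the normalized query
    # look slow in the first place.
    conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

    # Same normalized query as before, now served by the index.
    rows = conn.execute("""
        SELECT c.name, SUM(o.total)
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id
    """).fetchall()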
I would agree. IMO, far too many people underestimate the maintainability problems of a difficult schema, normalized or not.
And as somebody pointed out, historical (read-only) data is sometimes easier to sift or use if denormalized. This is because most such analysis is done from a customer-sales perspective rather than being concerned with the internal processes behind already-completed work.
This is actually true right now for the application I'm maintaining.
I've found normalizing wins points when it eliminates clearly duplicated and errant data. However, it is possible to hide the normalization in the interface rather than doing it right down at the database level. Whilst this makes for hairy code, it is doable and is sometimes the solution. But it sacrifices maintainability.
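For what it's worth, here's roughly what I mean by hiding it in the interface, as a Python/sqlite3 sketch over an invented flat orders table: the duplication stays in the database, and the access layer fans one logical change out to every duplicated row.

    import sqlite3

    # Invented flat table: customer fields are repeated on every order row.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE orders_flat (
            id             INTEGER PRIMARY KEY,
            customer_name  TEXT,
            customer_email TEXT,
            total          REAL
        )
    """)

    def update_customer_email(conn, name, new_email):
        # The "normalization" lives here, not in the schema: one logical
        # change is fanned out to every duplicated row, so callers never
        # see the duplication.
        conn.execute(
            "UPDATE orders_flat SET customer_email = ? WHERE customer_name = ?",
            (new_email, name),
        )
        conn.commit()

You can see why it's hairy: every accessor has to know about the duplication, and one forgotten code path leaves the rows inconsistent.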
Then, too, I've found that completely normalising the data can sometimes make the database harder to use. In the simplest case, code that fetches data might go from reading one or two tables to having to dance across 6 or 7. Often that's not a huge problem, true, until an update is required as well.
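To make the "dance across 6 or 7" concrete, here is the same hypothetical fetch written both ways (schema invented purely for illustration). The denormalized version reads one table; the fully normalized version joins six.

    # Denormalized: the fetch reads a single flat table.
    flat_fetch = """
        SELECT order_id, customer_name, product_name, warehouse_name
        FROM order_report
    """

    # Fully normalized: the same fetch dances across six tables.
    normalized_fetch = """
        SELECT o.id, c.name, p.name, w.name
        FROM orders o
        JOIN customers   c ON c.id = o.customer_id
        JOIN order_items i ON i.order_id = o.id
        JOIN products    p ON p.id = i.product_id
        JOIN stock       s ON s.product_id = p.id
        JOIN warehouses  w ON w.id = s.warehouse_id
    """

And that's only the read side; once an update has to touch several of those tables, you need a transaction spanning all of them as well.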
Wade.