OTOH, I have a metric assload of data that took a couple of weeks to amass - this I've been keeping in a pg_dump-generated file.

What about deployment - is this still the way to go for production? It kind of looks like dumping with pg_dump to an archive file and restoring with pg_restore gives quite a bit more flexibility - but I am suspicious of binary formats.
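In other words, something along these lines (the database and file names here are made up):

    # dump to a custom-format archive
    pg_dump -Fc mydb > mydb.dump

    # restore the whole archive into a fresh database
    pg_restore -d mydb_new mydb.dump

    # or cherry-pick a single table out of the same archive
    pg_restore -d mydb_new -t mytable mydb.dump

The custom format is compressed and lets pg_restore pull out individual tables, which a plain SQL dump can't do - that's the flexibility I mean.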
As noted above, I use pg_dump to dump my data to a text file. The --inserts flag tells pg_dump to emit an INSERT INTO ... statement for each row instead of the default COPY statement with tab-delimited data. I prefer to have SQL statements because I can read them a lot more easily than I can read tab-delimited data. Maybe I'm just set in my ways and I need to upgrade... but I'll take the performance degradation that goes along with INSERT statements in favor of something that I *could* read more easily if I *had* to.
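The command is basically this (again, names are just placeholders):

    # plain-text SQL dump, one INSERT per row instead of COPY
    pg_dump --inserts mydb > mydb.sql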

Importing the output of my pg_dump command into a new copy of my database is a simple \i filename.sql at the psql prompt.
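With made-up names, the whole round trip looks like:

    $ createdb mydb_copy
    $ psql mydb_copy
    mydb_copy=# \i mydb.sql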