The daemon approach I suggested was a take-off on FastCGI. Now that I think about it, you'll probably want the receiver process to write to the filesystem-based log/DB file and a separate UPDATEr process to read that file and do the Oracle UPDATEs from there.
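
Just to make the shape of that concrete, here's a rough sketch of the receiver side in Python: one record appended per line to a local log file, fsync'd so a crash can't eat it. The file path and record layout are my own placeholders, not anything you have to use.

```python
# receiver.py -- sketch of the receiver process: accept a record and append it
# to a local, filesystem-based log before anything touches Oracle.
# LOG_PATH and the record fields are assumptions, not a prescribed format.
import json
import os
import time

LOG_PATH = "/var/spool/shuttle/updates.log"   # hypothetical spool location

def log_update(record: dict) -> None:
    """Append one update record as a JSON line; fsync so a crash can't lose it."""
    line = json.dumps(record, separators=(",", ":")) + "\n"
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(line)
        fh.flush()
        os.fsync(fh.fileno())

if __name__ == "__main__":
    # Example: whatever the shuttler hands us gets logged immediately.
    log_update({"ts": time.time(), "id": 42, "status": "RECEIVED"})
```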

I like the double writing because the odds of losing your Oracle connection are pretty high compared with not being able to write on your local box (in which case you're pretty much in the toilet anyway).

Farming this out to separate processes also gets the 5+ hours of Oracle UPDATEs out of your shuttling loop. This becomes critical when the connection fails or slows down. The local logging gives you a chance to replay transactions when the connection comes back.
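
The replay side could look something like this, again only a sketch: the UPDATEr keeps a byte-offset checkpoint into the same log, applies each record to Oracle in order, and just waits and reconnects whenever the connection drops. I'm assuming the python-oracledb driver here, and the table/column names and connect details are placeholders.

```python
# updater.py -- sketch of the UPDATEr process: read the local log, apply each
# record to Oracle serially, and checkpoint progress so a dropped connection
# just means "pick up where we left off" when it comes back.
import json
import time

import oracledb  # assumption: python-oracledb driver

LOG_PATH = "/var/spool/shuttle/updates.log"    # must match the receiver
CHECKPOINT_PATH = LOG_PATH + ".offset"         # offset of last applied record

def read_offset() -> int:
    try:
        with open(CHECKPOINT_PATH) as fh:
            return int(fh.read().strip() or 0)
    except FileNotFoundError:
        return 0

def write_offset(offset: int) -> None:
    with open(CHECKPOINT_PATH, "w") as fh:
        fh.write(str(offset))

def apply(conn, record: dict) -> None:
    # Placeholder statement -- substitute the real table and columns.
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE shuttle_status SET status = :status WHERE id = :id",
            status=record["status"], id=record["id"],
        )
    conn.commit()

def run() -> None:
    while True:
        try:
            conn = oracledb.connect(user="app", password="secret",
                                    dsn="dbhost/orclpdb")
        except oracledb.DatabaseError:
            time.sleep(30)          # connection is down; try again later
            continue
        try:
            with open(LOG_PATH, "r", encoding="utf-8") as fh:
                fh.seek(read_offset())
                while True:
                    line = fh.readline()
                    if not line:
                        break
                    apply(conn, json.loads(line))
                    write_offset(fh.tell())   # checkpoint after each record
        except oracledb.DatabaseError:
            pass                    # lost the connection mid-replay; resume later
        finally:
            try:
                conn.close()
            except oracledb.DatabaseError:
                pass
        time.sleep(5)               # poll for new records from the receiver
```

Serializing through a single loop like this is also what gives you the reduced contention mentioned next: there is only ever one UPDATE in flight.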

Serializing your Oracle UPDATEs should reduce contention at the Oracle server end, as well.

BTW, there are plenty of neat opportunities for "Death Spirals" if you keep the UPDATEs in-line. My favorite is the one where somebody starts a long-running query against your table; Oracle struggles to maintain a consistent view by doing its hocus-pocus with the rollback segments while the UPDATEs roll in; DB server performance lags; UPDATEs take longer; and then either files don't get shuttled because the shuttler is waiting on the UPDATE, or your process table fills up with new shuttlers. The system may be able to dig a hole deep enough that it can't climb back out.

Sorry about that doom scenario.