This is oh so timely...
We're about to start looking at ways of making a database-backed application distributed. We currently use MySQL, which works quite well so long as one database is all you want. Leveraging MySQL's replication gets us only part of the way, because it is not possible for all application instances to write to one master database and read from a local, replicated copy. Why? Because connectivity between the databases cannot be guaranteed. Or, put another way, the internet connections between the databases are dial-up modems with changeable IP addresses. If we use replication at the database level, it has to be automatic and two-way. I'm told Oracle can do this. Currently, that is out of our price range, certainly for experiments, anyway.
So. There was an announcement just today about PostgreSQL now supporting this. It's called eRServer. Has anyone played with it? I'm having trouble finding a list of its features on the web.
This, of course, leads on to my main question: has anyone migrated a large database (>150 tables) from MySQL to PostgreSQL?
I began exploring what would be required in our application a week ago. Apart from a quite different way of managing the database, the first hurdle I found was that PostgreSQL doesn't support ENUM the way MySQL does. Googling recently found [link|http://www.sitepoint.com/article/542/2?SID=c31dc31e696c94aa7e2b1d097e581b91|at least one] useful link on converting - that one is useful mostly because our app is in PHP. There are still some questions about SQL compatibility and column formats, of course. Unfortunately, I can't devote any work time to this project unless the results are going to be very promising. This is not an arbitrary management decision: we simply have too much other stuff that needs to be done to spend time on this without any guarantee of fairly early gain.
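For what it's worth, the usual workaround for the missing ENUM type is a plain text column with a CHECK constraint listing the allowed values (a lookup table with a foreign key is the other common approach). Here's a minimal sketch of the CHECK-constraint idea - the table and column names are made up, and I'm using SQLite only so the snippet runs standalone; the same DDL idea applies in PostgreSQL:

```python
import sqlite3

# MySQL:      status ENUM('new','paid','shipped')
# Workaround: a text column constrained to the same set of values.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id     INTEGER PRIMARY KEY,
        status TEXT NOT NULL CHECK (status IN ('new', 'paid', 'shipped'))
    )
""")

# A value from the allowed set is accepted...
conn.execute("INSERT INTO orders (status) VALUES ('paid')")

# ...and anything else is rejected, just like an ENUM would reject it.
try:
    conn.execute("INSERT INTO orders (status) VALUES ('lost')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The main behavioural difference to watch is that MySQL's ENUM silently coerces bad values (to the empty string) rather than rejecting them, so a converted app may start seeing errors where MySQL quietly swallowed mistakes.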
Thanks for ideas and comments.
Wade.