Basic risk analysis would have flagged this weakness. We won't TOUCH a datastore or database server without a well-defined maintenance window, with the risks and contingencies identified, and the first step after taking the database offline is a backup. The risk of making that backup is infinitesimal compared to what they're facing now.
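For what it's worth, here's a minimal sketch of that "backup first, verify, then proceed" step, assuming a PostgreSQL datastore with pg_dump on the path; the database name and paths are hypothetical, not anything Danger actually ran:

```python
import hashlib
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

def backup_and_verify(db_name: str, backup_dir: Path) -> Path:
    """Dump the database and refuse to continue unless the dump looks sane."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = backup_dir / f"{db_name}-{stamp}.dump"

    # Take the backup before anything else touches the datastore.
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(dump_path), db_name],
        check=True,
    )

    # Sanity check: a zero-byte dump is not a backup.
    if dump_path.stat().st_size == 0:
        sys.exit(f"ABORT: backup {dump_path} is empty; do not proceed.")

    # Record a checksum so the restore side can verify integrity later.
    digest = hashlib.sha256(dump_path.read_bytes()).hexdigest()
    dump_path.with_suffix(".sha256").write_text(f"{digest}  {dump_path.name}\n")
    return dump_path

if __name__ == "__main__":
    backup_and_verify("subscriber_profiles", Path("/var/backups"))
    # ...only now does the actual maintenance or migration work start...
```

The point isn't the tooling, it's the ordering: the maintenance task literally cannot run until a verified backup exists.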
Also, any fool who'd design a service like that without a) at least one geo-redundant data center, and b) a separate geo-redundant backup system should be hanged from the nearest yardarm. Likewise, anyone who'd create this task without a known-good backup as part of a fleshed-out contingency plan should join him.
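To make b) concrete, a rough sketch of pushing the verified dump off-site to two independent locations, assuming rsync over SSH and hypothetical hostnames; the only requirement that matters is that no single facility ever holds the only copy:

```python
import subprocess
from pathlib import Path

# Hypothetical off-site destinations in two different regions, neither of
# which shares infrastructure with the primary data center.
OFFSITE_TARGETS = [
    "backup-east.example.com:/srv/backups/",
    "backup-west.example.com:/srv/backups/",
]

def replicate_offsite(dump_path: Path) -> None:
    """Push the dump and its checksum to every off-site target; fail loudly."""
    for target in OFFSITE_TARGETS:
        for artifact in (dump_path, dump_path.with_suffix(".sha256")):
            subprocess.run(["rsync", "-a", str(artifact), target], check=True)
```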
I've seen a fair number of articles (mostly written by idiots) claiming this is yet another reason that cloud computing is teh bad. Personally, I don't think you can apply that label here, because this service is apparently only slightly better equipped than a high-school web genius with a stack of servers in a back bedroom, cooled with a box fan and served over wifi stolen from a neighbor.
I work at a company that sells services based on cloud and grid computing, and that kind of reporting is exactly what pisses me off. Yes, we're geo-redundant on backups and content delivery, and will be on application serving within a year. And no, we don't have anywhere near 11MM users, but we apparently take the access and security of our clients' data a hell of a lot more seriously than Danger does.