Post #315,548
10/13/09 9:15:17 AM
|

not so fast, with SAN connectivity getting smarter
Backups are done on the SAN with snapshots, not offline storage, UNLESS there is a DR requirement that calls for offline storage. Many folks are resistant to that because of the cost and speed of it and don't do it. Getting more common in data centers.
|
Post #315,557
10/13/09 11:10:07 AM
|

Not so fast yourself...
Redundancy IS NOT a replacement for backups.
RAID IS NOT a replacement for backups.
SNAPSHOTS ON THE SAME SAN (and yes I have used them to restore from) ARE NOT a replacement for backups
Even though I have all that... I still have a backup solution... which then also replicates to a geo-redundant storage.
There's no excuse to "rely" on a supposedly sure thing that has historically been shown to fail. RAID fails regularly, redundancy fails regularly, SNAPSHOTS can be and have been proven bad regularly, backups have proven to be bad, and geo-redundant storage has been proven bad in some cases (if the backup source is bad).
If you don't test... you'll never know.
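If you want to see what "test" actually means in practice, here's a minimal sketch: restore the newest backup to scratch space and verify it against a stored checksum. The paths, file names, and the copy-as-restore step are made-up placeholders, not any particular product's layout.

import hashlib
import pathlib

BACKUP_DIR = pathlib.Path("/backups/nightly")      # hypothetical location
SCRATCH = pathlib.Path("/restore-test/scratch")     # hypothetical scratch space

def sha256(path: pathlib.Path) -> str:
    # Hash the file in chunks so large dumps don't blow out memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def test_latest_backup() -> bool:
    # Pick the newest dump, "restore" it to scratch, and compare checksums.
    latest = max(BACKUP_DIR.glob("*.dump"), key=lambda p: p.stat().st_mtime)
    restored = SCRATCH / latest.name
    restored.write_bytes(latest.read_bytes())        # stand-in for a real restore
    expected = latest.with_suffix(".sha256").read_text().strip()
    ok = sha256(restored) == expected
    print(f"{latest.name}: {'OK' if ok else 'CORRUPT'}")
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if test_latest_backup() else 1)

Run something like that from cron after every backup window. The day it prints CORRUPT, you find out before you need the restore instead of during.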
|
Post #315,564
10/13/09 11:25:35 AM
|

YOU do and I ADVOCATE that; doesn't mean that it happens
|
Post #315,574
10/13/09 1:29:07 PM
|

Re: YOU do and I ADVOCATE that; doesn't mean that it happens
So, that means lost data eventually.
Plan for the worst case and implement worst-case recovery plans.
Plus, if you don't test regularly...
Backups are like expensive insurance. You hate to spend money on it, hate to devote time to it, hate having to have it at all. But when all else fails and the backups are there and save all of your customer's data... you'll be glad you had them.
Sort of like going with "no-fault" only on a 2010 Mercedes-Benz 500SEL. If you crash it, you are stuck fixing it yourself, slowly and at great cost. If you had full coverage, you might even get a new 2010 to replace it. Almost as if nothing happened.
The first time you have catastrophic data loss, you'd better make sure you've archived *ALL* of your advice and can prove people brushed it off; otherwise... it's your ass.
|
Post #315,575
10/13/09 1:31:04 PM
|

Saved in several places
|
Post #315,623
10/14/09 12:25:23 PM
|

inexcusable
Basic risk analysis would point this potential weakness out. We won't TOUCH a datastore or database server without a well-defined maintenance window with risks and contingencies identified, and the first step after taking the database offline is a backup. The risk of making the backup is infinitesimal compared to what they're facing now.
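A gate like that is trivial to automate, too. Something along the lines of the sketch below, as the first line of the runbook; the location and freshness policy are made up for illustration:

import sys
import time
import pathlib

BACKUP_DIR = pathlib.Path("/backups/pre-maintenance")   # made-up location
MAX_AGE_HOURS = 4                                        # made-up freshness policy

def newest_backup_age_hours() -> float:
    dumps = list(BACKUP_DIR.glob("*.dump"))
    if not dumps:
        return float("inf")                              # no backup at all
    newest = max(p.stat().st_mtime for p in dumps)
    return (time.time() - newest) / 3600

if __name__ == "__main__":
    age = newest_backup_age_hours()
    if age > MAX_AGE_HOURS:
        print(f"ABORT: newest backup is {age:.1f}h old (limit {MAX_AGE_HOURS}h)")
        sys.exit(1)
    print("Fresh backup present -- proceed with the maintenance window")

If that check fails, the maintenance window doesn't open. Period.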
Also, any fool who'd design a service like that without a) at least one georedundant data center, and b) a separate geo-redundant backup system should be hung from the nearest yardarm. Likewise, anyone who'd create this task without a known-good backup as part of a fleshed out contingency plan should join him.
I've seen a fair number of articles (mostly written by idiots) claiming this is yet another reason that cloud computing is teh bad. Personally, I don't think you can apply that label here because this service is apparently only slightly better equipped than the high school webgenius with a stack of servers in a back bedroom cooled with a box fan and served from wifi stolen from a neighbor.
Speaking for a company that sells services based on cloud and grid computing, that kind of reporting is what pisses me off. Yes, we're geo-redundant on backups and content delivery, and will be on application serving within a year. And no, we don't have on the order of 11MM users, but we apparently take access and security of our client data a hell of a lot more seriously than Danger.
|
Post #315,624
10/14/09 12:29:24 PM
|

Little harsh there, don't you think?
You make it sound like computer systems are notoriously unreliable, perhaps due to their inherent complexity. Do you really think the recent history of computing bears out such a pessimistic outlook?
--
Drew
|
Post #315,627
10/14/09 1:22:54 PM
|

I think it was right on...
I personally feel the blame falls on the entire stack of people responsible for this screw-up:
The people that gave the OK to Hitachi. FIRST!
The people in charge of backups (tested backups).
The people that planned this event without properly recognizing the risks and therefore having the contingency plan all set up.
The SAN operators that allowed this to proceed without a proper SNAPSHOT saved off to another SAN, unless they were (provably) strong-armed.
Last but not least, the CTO and IT managers, possibly even the CEO, as the ultimate responsibility is his.
And yes, computers are KNOWN to be that unreliable. Google doesn't even fix broken machines in data centers. Their policy is that *IF* a machine stops responding, a reboot request goes out through the "console setup" they have. If the machine comes back, it is automatically re-imaged and made "better". If it fails to respond, it is shut down and left to decay in place until the "rack" itself is removed.
They figure it's going to cost them a minimum of $600 to send a warm body out there, find the machine, reboot it and (possibly) fix it. When the cost of a new machine for them is so low... it's not even worth the time and effort to deal with failures any other way than to ignore them.
|
Post #315,628
10/14/09 1:24:57 PM
|

Well played, sir. :-)
|
Post #315,629
10/14/09 1:28:19 PM
|

Oh dude... did he zing me or what?
Of course, I live it. So it's hard to see the sarcasm when you are so close to it.
|
Post #315,632
10/14/09 1:53:11 PM
|

That should be "'hanged' from the nearest yardarm" . . .
. . hung is something else.
|
Post #315,634
10/14/09 2:26:33 PM
|

rman to a different array than the database
I insisted on an rman to virt but was overruled. It's gonna happen sooner or later. We do have geo-redundancy in place for a different application; those folks understood.
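For reference, pointing rman at a different array is basically just a channel FORMAT aimed at storage mounted from that array. A rough sketch of what I was pushing for; the mount point and channel name are made up, and it assumes OS authentication for the connection:

import subprocess

# RMAN commands: back up the database (plus archivelogs) to disk storage
# mounted from a second array. '/mnt/array2' and 'other_array' are made up.
RMAN_SCRIPT = """
RUN {
  ALLOCATE CHANNEL other_array DEVICE TYPE DISK FORMAT '/mnt/array2/rman/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL other_array;
}
"""

result = subprocess.run(
    ["rman", "target", "/"],       # connect to the local instance as SYSDBA
    input=RMAN_SCRIPT,
    text=True,
    capture_output=True,
)
print(result.stdout)
result.check_returncode()

Cheap to script, cheap to schedule, and the backup lives on spindles that don't die with the primary array.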
|
Post #315,576
10/13/09 1:58:05 PM
|

Test? You're supposed to test?
A few years back, when my employer was using tape backup, 3,000 tapes per backup, I was advocating a change. Told them that even with a 99.995% good rate, there were still going to be bad tapes. Nah, never happen.
Did a Disaster recovery test. Had to go back 3 or 4 weeks of tapes before we found a complete set...
Much better now, still a long way from what I'd call reliable.
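For anyone who thinks 99.995% sounds safe, run the numbers (assuming tape failures are independent, which is generous):

# Back-of-envelope check on a 99.995% per-tape good rate across a
# 3,000-tape backup set, assuming failures are independent.
per_tape_good = 0.99995
tapes_per_backup = 3000

p_complete_set = per_tape_good ** tapes_per_backup
expected_bad = tapes_per_backup * (1 - per_tape_good)

print(f"Chance a full set has zero bad tapes: {p_complete_set:.1%}")   # ~86%
print(f"Expected bad tapes per set:           {expected_bad:.2f}")     # ~0.15

So even at the quoted rate, roughly one full backup in seven has at least one bad tape, and having to dig back 3 or 4 weeks for a complete set suggests the real-world rate was worse than the spec.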
|