New Not so fast yourself...
Redundancy IS NOT a replacement for backups.

RAID IS NOT a replacement for backups.

SNAPSHOTS ON THE SAME SAN (and yes I have used them to restore from) ARE NOT a replacement for backups.

Even though I have all that... I still have a backup solution... which then also replicates to geo-redundant storage.

'Tis no excuse to "rely" on a sure thing that has historically been shown to fail. RAID fails regularly, redundancy fails regularly, SNAPSHOTS can and have been proven bad regularly, backups have proven to be bad, and geo-redundant storage has been proven to be bad in some cases (if the backup source is bad).

If you don't test... you'll never know.
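
To make the "if you don't test, you'll never know" point concrete, here's a bare-bones sketch of the kind of automated restore test I'm talking about. Everything in it -- the paths, the tar-archive-plus-checksum-manifest backup format -- is an illustrative assumption, not anyone's actual setup:

```python
#!/usr/bin/env python3
"""Minimal restore-test sketch.

Assumes (purely for illustration) that backups are .tar.gz archives in
BACKUP_DIR, each with a sibling ".sha256" manifest written at backup time
in the usual "HEXDIGEST  relative/path" format. The test restores the
newest archive into a scratch directory and verifies every file against
that manifest -- if this never runs, you never know the backup is good.
"""

import hashlib
import tarfile
from pathlib import Path

BACKUP_DIR = Path("/backups/app")    # hypothetical backup location
SCRATCH = Path("/tmp/restore-test")  # throwaway restore target


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def main() -> None:
    archives = sorted(BACKUP_DIR.glob("*.tar.gz"))
    if not archives:
        raise SystemExit("No backups found -- which is a finding in itself.")
    newest = archives[-1]

    manifest = newest.with_suffix("").with_suffix(".sha256")  # foo.tar.gz -> foo.sha256
    if not manifest.is_file():
        raise SystemExit(f"Missing checksum manifest {manifest.name}")

    SCRATCH.mkdir(parents=True, exist_ok=True)
    with tarfile.open(newest) as tar:
        tar.extractall(SCRATCH)  # restore into scratch, never onto live data

    failures = 0
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        digest, rel_path = line.split(maxsplit=1)
        restored = SCRATCH / rel_path
        if not restored.is_file() or sha256(restored) != digest:
            failures += 1
            print(f"BAD: {rel_path}")

    if failures:
        raise SystemExit(f"Restore test FAILED: {failures} file(s) missing or corrupt")
    print(f"Restore test passed for {newest.name}")


if __name__ == "__main__":
    main()
```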

New YOU do know I ADVOCATE that, doesn't mean that it happens
New Re: YOU do know I ADVOCATE that, doesn't mean that it happens
So, that means lost data eventually.

Plan for worst case and implement worst scenario recovery plans.

Plus, if you don't test regularly...

Backups are like expensive insurance. You hate to spend money on it, hate to devote time to it, hate having to have it. But when all else fails and backups are there and save all of your customers' data... you'll be glad you had it.

Sort of like going with "no-fault" only on a 2010 Mercedes-Benz 500SEL. If you crash it, you are stuck fixing it yourself, slowly and at great cost. If you had full coverage, you might even get a new 2010 to replace it. Almost as if nothing happened.

The first time you have catastrophic data loss, you'd better make sure you've archived *ALL* of your advice and are able to prove people brushed it off, otherwise... it's your ass.
New Saved in several places
New inexcusable
Basic risk analysis would point this potential weakness out. We won't TOUCH a datastore or database server without a well-defined maintenance window with risks and contingencies identified, and the first step after taking the database offline is a backup. The risk of making the backup is infinitesimal compared to what they're facing now.

Also, any fool who'd design a service like that without a) at least one georedundant data center, and b) a separate geo-redundant backup system should be hung from the nearest yardarm. Likewise, anyone who'd create this task without a known-good backup as part of a fleshed out contingency plan should join him.
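
For illustration only, here's roughly what the "backup first, or the window doesn't open" rule looks like as a gate in a maintenance script. Every name, path, and threshold below is a made-up assumption, not a description of our actual tooling:

```python
"""Sketch of a pre-maintenance gate: the risky work does not start unless a
fresh, verified backup exists. Paths, the age limit, and the 'verified' flag
convention are all illustrative assumptions."""

import time
from pathlib import Path

BACKUP = Path("/backups/pre-change/subscribers.dump")  # hypothetical pre-change backup
VERIFIED_FLAG = BACKUP.with_suffix(".verified")        # written by a prior restore test
MAX_AGE_SECONDS = 4 * 3600                             # backup must be this recent


def backup_is_usable() -> bool:
    """A backup counts only if it exists, has passed a restore test,
    and was taken inside the allowed window."""
    if not BACKUP.is_file() or not VERIFIED_FLAG.is_file():
        return False
    return (time.time() - BACKUP.stat().st_mtime) <= MAX_AGE_SECONDS


def run_maintenance() -> None:
    print("...the risky SAN/database work would go here...")


if __name__ == "__main__":
    if not backup_is_usable():
        raise SystemExit("ABORT: no fresh, verified backup -- the maintenance "
                         "window does not open until one exists.")
    run_maintenance()
```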

I've seen a fair number of articles (mostly written by idiots) claiming this is yet another reason that cloud computing is teh bad. Personally, I don't think you can apply that label here because this service is apparently only slightly better equipped than the high school webgenius with a stack of servers in a back bedroom cooled with a box fan and served from wifi stolen from a neighbor.

As someone at a company that sells services based on cloud and grid computing, that kind of reporting is what pisses me off. Yes, we're georedundant on backups and content delivery, and will be on application serving within a year. And no, we don't have on the order of 11MM users, but we apparently take access and security of our client data a hell of a lot more seriously than Danger.
New Little harsh there, don't you think?
You make it sound like computer systems are notoriously unreliable, perhaps due to their inherent complexity. Do you really think the recent history of computing bears out such a pessimistic outlook?
--

Drew
New I think it was right on...
I personally feel that it applies to the entire stack of people responsible for this screw-up:

The people that gave the OK to Hitachi. FIRST!

The people in charge of backups (tested backups).

The people that planned this event without properly recognizing the risks and therefore having the contingency plan all set up.

The SAN operators that allowed this to proceed without a proper SNAPSHOT saved off to another SAN, unless they were (provably) strong-armed.

Last but not least, the CTO and IT managers, possibly even the CEO, as the ultimate responsibility is his.



And yes, computers are KNOWN to be that unreliable. Google doesn't even fix broken machines in data centers. They have a policy that *IF* a machine stops responding, they send a reboot request through the "console setup" they have. If the machine comes back, it is automatically re-imaged and made "better". If it fails to respond, it is shut down and left to decay in place until the "rack" itself is removed.

They figure it's going to cost them a minimum of $600 to send a warm body out there, find the machine, reboot it and (possibly) fix it. When the cost of a new machine for them is so low... it's not even worth the time and effort to deal with failures any other way than to ignore them.
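
Spelled out, the policy as I understand it boils down to the logic below. The Console class and its method names are invented stand-ins for whatever Google actually runs; the only number taken from above is the roughly $600 minimum cost of sending someone on-site:

```python
"""Sketch of the remediation policy described above. The Console class is a
stand-in -- its methods are invented for illustration and are not any real
Google (or other vendor) API."""

TRUCK_ROLL_MINIMUM = 600  # claimed minimum cost of a hands-on visit


class Console:
    """Stand-in for the out-of-band "console setup" mentioned above."""

    def reboot(self, machine: str) -> None:
        print(f"reboot requested for {machine}")

    def is_alive(self, machine: str) -> bool:
        return False  # pretend the box stayed dead

    def reimage(self, machine: str) -> None:
        print(f"re-imaging {machine}")

    def power_off(self, machine: str) -> None:
        print(f"powering off {machine}; rack to be pulled later")


def handle_unresponsive(machine: str, console: Console) -> str:
    """Apply the described policy to a machine that stopped responding."""
    console.reboot(machine)        # step 1: remote reboot, no human involved
    if console.is_alive(machine):
        console.reimage(machine)   # step 2: it came back -> wipe and re-image
        return "reimaged"
    # Step 3: still dead. Sending a warm body costs at least $600, more than
    # a commodity node is worth to them, so shut it down and leave it in place.
    console.power_off(machine)
    return "left-in-place"


if __name__ == "__main__":
    print(handle_unresponsive("rack42-node17", Console()))
```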
New Well played, sir. :-)
New Oh dude... did he zing me or what?
Of course, I live it. So it's hard to see the sarcasm when you are so close to it.
New That should be "'hanged' from the nearest yardarm" . . .
. . hung is something else.
New rman to a different array than database
I insisted on an rman to a virt but was overruled. It's gonna happen sooner or later. We do have georedundancy for a different application in place; those folks understood.
New Test? You're supposed to test?
A few years back, when my employer was using tape backup, 3,000 tapes per backup, I was advocating changing. Told them that even with a 99.995% good rate, there were still going to be bad tapes. Nah, never happen.

Did a disaster recovery test. Had to go back 3 or 4 weeks of tapes before we found a complete set...

Much better now, but still a long way from what I'd call reliable.
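
For what it's worth, the arithmetic was on my side: at a 99.995% per-tape good rate, a 3,000-tape set has only about an 86% chance of being entirely readable, so roughly one set in seven should contain a bad tape -- and needing to go back 3 or 4 weeks to find a complete set suggests the real rate was quite a bit worse. A quick sketch of the numbers:

```python
"""Back-of-the-envelope check on the tape numbers quoted above:
3,000 tapes per backup set, each assumed 99.995% likely to be readable."""

TAPES_PER_SET = 3000
P_GOOD_TAPE = 0.99995  # the optimistic per-tape "good rate"

p_set_complete = P_GOOD_TAPE ** TAPES_PER_SET   # every tape in the set readable
p_set_has_bad = 1 - p_set_complete
expected_bad_per_set = TAPES_PER_SET * (1 - P_GOOD_TAPE)

print(f"P(whole set good)        = {p_set_complete:.1%}")        # ~86.1%
print(f"P(at least one bad tape) = {p_set_has_bad:.1%}")         # ~13.9%
print(f"Expected bad tapes/set   = {expected_bad_per_set:.2f}")  # 0.15
```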
     rofl microsoft - (boxley) - (20)
         Check how the business press reports it - (crazy)
         Blame shifted to Hitachi - (scoenye) - (18)
              I was afraid of that.... - (folkert) - (17)
                 not so fast, with SAN connectivity getting smarter - (boxley) - (12)
                     Not so fast yourself... - (folkert) - (11)
                          YOU do know I ADVOCATE that, doesn't mean that it happens -NT - (boxley) - (9)
                              Re: YOU do know I ADVOCATE that, doesn't mean that it happens - (folkert) - (1)
                                 Saved in several places -NT - (boxley)
                             inexcusable - (Steve Lowe) - (6)
                                 Little harsh there, don't you think? - (drook) - (3)
                                     I think it was right on... - (folkert)
                                     Well played, sir. :-) -NT - (Another Scott) - (1)
                                         Oh dude... did he zing me or what? - (folkert)
                                 That should be "'hanged' from the nearest yardarm" . . . - (Andrew Grygus)
                                 rman to a different array than database - (daemon)
                         Test? You're supposed to test? - (jbrabeck)
                 Yup, saw it coming - (crazy) - (3)
                     I have no direct experience with Hitachi... - (scoenye) - (2)
                         EMC is as good as hitachi - (boxley)
                          I have a buddy that works for HDS - (Steve Lowe)
