By Brian Reagan, VP Product Strategy & Business Development – Disaster Recovery as an IT concept has been around since the 1970s, when computers started to become a more vital part of business operations. In those days, mainframes dominated the raised floors of data centers, and the cost of the physical infrastructure ruled out maintaining a secondary, standby system for all but the largest companies. Of course, these systems were not operating 24×7 and were batch-processing oriented, so downtime measured in days was not catastrophic to the business.
Ah, how the times have changed.
The uptime and recoverability of the systems in data centers is now non-negotiable, so time, people, and money are dedicated to maintaining both. Most analysts estimate that between three and five percent of a data center budget is spent on DR. The irony is that, in spite of the increased spending, many CIOs lack confidence in their overall IT resilience. I met with several CIOs recently and asked, “How confident are you that you can ensure zero data loss in any one of your data centers?” Consistently, the pained yet honest answer was “not very.” Similarly, when we discussed DR, they did not feel their testing was frequent or complete enough to truly declare themselves disaster ready.
It seems that confidence, when it comes to IT resilience, is scarce.
The reality is that with the traditional tools most often found in the DR value chain – legacy backup systems, de-dupe appliances, tape media, replication software – there is no way to truly guarantee recovery. These tools tell you when they’ve made a backup copy of data. They tell you that a write was made to a remote disk. They tell you that your system is available. They do not tell you whether you are 100% able to run your production system in the event of a disruption.
Legacy tools are not capable of delivering an assurance of true resiliency. A de-duplication appliance is a wonderful invention for saving space, but you can’t run your production application on it. Tape is a wonderful, low-cost medium, particularly for long-term retention, but it takes days to restore a data set of any size, especially when recalled from a distant bunker. Legacy tools simply don’t provide a way to test production failover without disrupting production operations. And in the event of an actual failover, traditional tools cannot capture the changes made at the recovery site and automatically sync those back to production once it’s back online.
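To make that contrast concrete, here is a minimal sketch, in Python, of what an automated, non-disruptive failover test could look like in principle. This is not Actifio’s API or any specific vendor’s; every name in it (RecoveryOrchestrator, latest_copy, mount_isolated, teardown, health_check) is a hypothetical placeholder. The shape of the workflow is the point: boot the application from its most recent recovery copy in an isolated environment, verify it actually runs, record the evidence, and tear it down, all without touching production.

```python
"""A minimal sketch of a non-disruptive failover test.

All classes and method names here are hypothetical placeholders,
not any vendor's actual API. The idea: prove recoverability by
running the application from a recovery copy in isolation, rather
than just confirming that a backup job completed.
"""

from datetime import datetime, timezone


class RecoveryOrchestrator:
    """Hypothetical wrapper around a copy-data / replication platform."""

    def latest_copy(self, app: str):
        """Return a handle to the most recent point-in-time image of `app`."""
        raise NotImplementedError

    def mount_isolated(self, copy, network: str):
        """Present the copy to a sandboxed host on an isolated test network."""
        raise NotImplementedError

    def teardown(self, mount):
        """Unmount and discard the test instance; production is never touched."""
        raise NotImplementedError


def run_failover_test(orch: RecoveryOrchestrator, app: str, health_check) -> dict:
    """Boot the app from its latest recovery copy and verify it actually works.

    Returns a small report: which copy was tested, when, and whether the
    health check passed. That report is evidence of recoverability, not
    just evidence that a backup exists.
    """
    copy = orch.latest_copy(app)
    mount = orch.mount_isolated(copy, network="dr-test-vlan")
    try:
        healthy = health_check(mount)
    finally:
        orch.teardown(mount)

    return {
        "app": app,
        "copy_id": getattr(copy, "id", None),
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "recoverable": healthy,
    }
```

The design choice worth noticing is that the test runs against a copy on an isolated network, so it can be scheduled as often as you like; “disaster ready” then becomes a measured, repeatable result rather than a hope.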
In the immortal words of Arthur C. Clarke, “any sufficiently advanced technology is indistinguishable from magic.” To that end, I’d encourage you to check out Actifio’s magic act when it comes to both automated failover testing and failback / syncback.
It’s time to rebuild confidence in data protection, disaster recovery, and ultimately, business resilience.