
Data is more critical today than ever before. Recent statistics suggest that 90% of the world's data has been created in the last two years, and as we look forward, new technologies like the Internet of Things will only accelerate that growth. As our dependency on data increases, so does our need for continuous data access and consistent data protection.
The mainstream backup applications in use today were developed in the late 1980s, when hard drive sizes were measured in hundreds of megabytes and Microsoft was shipping Windows 2.11. As the world has evolved, these applications have bolted on new features to cope with exploding data volumes, but their underlying legacy architectures were never designed for the challenges we face today.
Let’s look at four ways that backup software is not cutting it.
Backup Windows
Historically, backup software has relied on a combination of full and incremental backups. An incremental backup copies only the changes since the last backup (full or incremental), so the amount of data transferred is typically small, which results in very fast backups. A full backup, in contrast, is a complete copy of the production data; it is typically very large and can become unmanageable as information grows.
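To make the contrast concrete, here is a minimal sketch of the two approaches in Python, assuming a simple directory-based model that detects changes via file modification times. The function names and layout are illustrative only, not taken from any real backup product:

    import os
    import shutil

    def full_backup(source_dir, backup_dir):
        # A full backup is a complete, self-contained copy of the source:
        # a large transfer, and therefore a long backup window.
        shutil.copytree(source_dir, backup_dir)

    def incremental_backup(source_dir, backup_dir, last_backup_time):
        # An incremental copies only files modified since the last run
        # (full or incremental): a small transfer, a short backup window.
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                src = os.path.join(root, name)
                if os.path.getmtime(src) > last_backup_time:
                    rel = os.path.relpath(src, source_dir)
                    dst = os.path.join(backup_dir, rel)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)

Even this toy version shows the asymmetry: the full backup touches every byte of production data, while the incremental touches only what changed since the last pass.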
In order to minimize backup windows, customers prefer to rely on incremental backups. However, this has serious implications for recovery times: full backups take much longer to complete, but they result in far faster recoveries. We need something better.
Recovery Times
In the past, people focused primarily on minimizing backup windows, but the reality is that recovery is what matters. Does it matter if you can back up your data daily but can never recover it? Clearly not, and so a key element of any protection solution is how fast you can recover your data.
The recovery challenge goes back to the incremental/full backup model. Incremental backups are small, but a recovery requires the team to first restore the previous full backup and then every subsequent incremental. This process is time consuming and creates risk, since a failure on any one incremental results in a total recovery failure. Recovering from a full backup is much simpler, since the entire image can be restored from a single backup.
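Continuing the illustrative sketch above (and reusing its imports), a chained restore might look like the following; note how a single missing incremental aborts the entire recovery:

    def restore(full_dir, incremental_dirs, target_dir):
        # Step 1: restore the last full backup as the base image.
        shutil.copytree(full_dir, target_dir)
        # Step 2: replay every subsequent incremental, in order. If any
        # link in the chain is missing, the whole restore fails.
        for inc in incremental_dirs:
            if not os.path.isdir(inc):
                raise RuntimeError("restore failed: missing incremental " + inc)
            shutil.copytree(inc, target_dir, dirs_exist_ok=True)

The longer the gap since the last full backup, the longer that loop runs, and the more links there are to break.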
Customers are forced to choose between fast backups with slow recoveries (incrementals) and slow backups with faster recoveries (fulls). This trade-off is unsustainable in today's data-centric world. You need a solution that enables both fast backup and fast recovery.
Complex DR Processes
As IT has become more critical to business operations, the challenge of disaster recovery has become more important than ever. A single outage can result in significant revenue and reputational impact, and so companies must be prepared for outages of all types, including unexpected disasters.
The challenge with traditional backup is that large-scale recoveries can take days or even weeks, which makes recovering from a disaster extremely problematic. Adding disk as a backup target can provide incremental improvements in recovery times, but the core challenge of lengthy recoveries remains. This is why many companies are unable to test their disaster recovery plans: the time and effort required is more than they have available. That is a scary situation, because without a test you cannot be sure your DR plan will actually work.
We need faster recovery methods that allow for instant data recovery, both locally and in the cloud. These methods should allow data to be presented instantly and run directly from the protection environment, without the need for a lengthy restore process.
Inability to Leverage Protected Data
Today’s IT infrastructures are constantly evolving, and in order to ensure reliability and consistency, companies must thoroughly test all proposed changes or upgrades before rolling them into production. In practice, this requires large lab environments, and companies must invest large sums of money to create, store, mount and access copies of production data. This capacity is in addition to any existing investment in production or protection storage, and so represents a significant inefficiency.
In an ideal world, we need a solution that allows instant, read/write-enabled data recoveries for testing purposes. By instantly mounting a copy of production data, lab storage requirements can be significantly reduced and tests can be performed against the latest production copies. This simplifies lab environments and improves testing quality.
In summary, IT has evolved dramatically over the last three decades, and data protection has struggled to keep up. Traditional backup models do not provide the flexibility, performance and scale that today's businesses demand. We need to rethink data protection so that it aligns with modern business requirements and delivers the availability those businesses need.