There are a variety of ways to make sure you have backup redundancy. Independent backup expert Brien Posey explains in this tip.
It has often been said that the best way to ensure data protection is to have three copies of the data — the original data, an on-site backup and an off-site backup. Thankfully, creating redundant backup copies is not nearly as challenging as it once was. In fact, there are numerous ways to create a backup and then make a separate copy to store off-site.
Another way to achieve backup redundancy is through disk-to-disk-to-disk backups. There are a number of variations of this technique, but the general idea is that data is written to a disk-based backup target on-site and then replicated to a secondary disk-based array off-site. This secondary array might be located in a remote data center, or the organization might opt to use cloud-based storage instead. The use of cloud-based storage in this way is sometimes referred to as disk-to-disk-to-cloud.
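The two-stage flow can be sketched in a few lines. This is a minimal, file-level illustration only, assuming local directories stand in for the on-site array and the off-site (or cloud) target; real products work at the block or image level and replicate over a WAN link.

```python
import shutil
from pathlib import Path

def disk_to_disk_to_disk(source: Path, onsite: Path, offsite: Path) -> None:
    """Two-stage backup: copy the primary data to an on-site target,
    then replicate that backup to an off-site target."""
    # Stage 1: back up the primary data to the on-site disk target.
    shutil.copytree(source, onsite, dirs_exist_ok=True)
    # Stage 2: replicate the on-site backup to the secondary array
    # (in practice this stage runs across a WAN link or to cloud storage,
    # which is the disk-to-disk-to-cloud variation).
    shutil.copytree(onsite, offsite, dirs_exist_ok=True)
```

Because the second stage reads from the on-site backup rather than from production storage, the replication job never touches the primary systems.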
Disk-to-disk-to-disk backups have a number of advantages, such as reliability and automation. Unlike tape-based backup products, there typically aren't any manual processes that must be completed on a daily basis. Of course, this reliability and automation come at a price, and cost is the biggest disadvantage of disk-to-disk-to-disk backups.
Unless cloud storage is being used as the remote backup target, disk-to-disk-to-disk backups require two separate backup storage arrays. Another cost that must be considered is bandwidth: the organization must have sufficient WAN bandwidth to replicate the backup data between data centers. Block-level deduplication can decrease the amount of data sent across the wire, but a considerable amount of data may still be transmitted each day.
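The bandwidth savings from block-level deduplication come from sending only blocks the remote side has not already seen. The sketch below is illustrative, not any vendor's algorithm; it assumes fixed-size blocks and a simple hash index standing in for the remote array's block store.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products often use variable-size chunking

def replicate_with_dedup(data: bytes, remote_blocks: dict) -> int:
    """Replicate data to a remote block store, skipping blocks the
    remote side already holds. Returns the bytes actually transmitted."""
    sent = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_blocks:
            remote_blocks[digest] = block  # "transmit" the new block
            sent += len(block)
        # Otherwise only the (tiny) hash reference crosses the wire.
    return sent
```

On a second nightly run over mostly unchanged data, almost every block hashes to a value the remote side already has, so very little is transmitted.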
As previously mentioned, there are a number of variations on the concept of disk-to-disk-to-disk backups. Larger organizations tend to use storage-level replication as part of their disk-to-disk-to-disk backups. For smaller organizations, it is often more cost-effective to perform virtual machine (VM) level replication because doing so does not generally require any specialized hardware. Windows Server 2012 R2 Hyper-V, for example, offers an extended replication feature that allows VMs to be replicated to two separate targets, one on-site and one off-site.
This type of replication is appealing to smaller organizations because it is easy to configure and it allows for the use of commodity servers and storage hardware. The disadvantage, however, is that VM-level replication does not scale well since each VM must be configured for replication individually.
A somewhat newer approach to redundancy is copy data management, a term coined by Actifio. It refers to a technique designed to reduce the number of copies of data stored within the organization. The Actifio approach allows for two copies of a piece of data — a primary copy and a backup. Although other copies of the data might be needed throughout the organization, a snapshotting mechanism is used to provide virtual copies of the data on an as-needed basis.
From a data protection standpoint, the main benefit of this technology is that it protects against accidental data modification. Although the software does allow for write operations, writes are directed to a differencing disk rather than to the primary copy of the data, thereby maintaining the integrity of the original data.
There are a number of different products available for achieving backup redundancy. Each of these products has its own set of advantages and disadvantages, especially with regard to cost and complexity. Organizations must choose a product that fits their current needs, while also considering whether or not their chosen product will continue to meet their needs in the future.