
While I’m not in sales, I am lucky enough to talk with a lot of enterprise IT organizations in my travels, about 10 meetings or calls a week. Recently, I have been seeing more environments than ever with 50 or 100 TB Oracle databases and 10 to 20 TB Microsoft SQL Server instances; in other words, many VLDBs (very large databases, which has to be one of the funniest acronyms in tech after JBOD). There are some common emotional patterns across the organizations that run such environments.
The DBAs and backup admins are not happy with their current data protection tools. Why? Recovering such large databases is a nerve-racking (even job-threatening) experience. Even getting the backups done is often a serious challenge.
The IT Directors are not happy with so much money, time, and nervous energy spent just protecting such large databases.
And the application teams. Ha! They’ve learned not to ask for any backup copies. They might as well ask for the punch cards from the ‘70s mainframe.
Why are people losing sleep over managing backups? A full backup of a 100 TB database, fully saturating a 10 Gbps network, would take roughly 22 hours under truly ideal conditions; call it a full day in practice. That assumes the production storage, the target backup storage, and the media servers can all sustain that throughput without a glitch, and that no other workloads share the network. So full backups get pushed to the weekend. If you’re an IT admin and the backup hits one small glitch during that window, you’re getting paged while at the park with your kids, or having a drink in the middle of your favorite football game, or worse, you get a call at 2 a.m.
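If you want to check that math yourself, here is the back-of-the-envelope version; the figures are idealized assumptions (decimal terabytes, zero protocol overhead), not measurements:

```python
# Back-of-envelope math for the full-backup window above.
# Ideal-case assumptions: a fully saturated 10 Gbps link,
# no protocol overhead, no competing traffic.

DB_SIZE_BYTES = 100 * 10**12   # 100 TB database (decimal terabytes)
LINK_BPS = 10 * 10**9          # 10 Gbps network, in bits per second

transfer_seconds = (DB_SIZE_BYTES * 8) / LINK_BPS
print(f"{transfer_seconds:,.0f} s = {transfer_seconds / 3600:.1f} hours")
# -> 80,000 s = 22.2 hours; real-world overhead (protocol, storage
#    contention, retries) easily pushes that to a full day.
```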
The problems don’t stop there. When you do weekend full plus weekday incremental backups, what about recoveries? You not only have to restore the full 100 TB database, but also apply the incremental restores on top of it. Those restores would most likely come off a tape library or a deduplication disk system, which means extremely slow restore speeds, probably 48 to 72 hours end to end. The probability of a successful restore is about as good as Johnny Manziel returning to the Cleveland Browns and winning a Super Bowl in 2017. Anxiety levels are high enough when recovering a 1 TB database; waiting three days for a DR test to finish is truly nerve-racking.
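That multi-day estimate holds up to the same kind of napkin math. The effective throughput figures below are illustrative assumptions only; real rehydration rates from a deduplication appliance or tape library vary widely by system and load:

```python
# Rough sanity check on the 48-to-72-hour restore estimate.
# The MB/s rates are assumed for illustration, not measured.

DB_SIZE_BYTES = 100 * 10**12               # 100 TB full restore
for rate_mb_s in (400, 500, 600):          # assumed effective restore rates
    hours = DB_SIZE_BYTES / (rate_mb_s * 10**6) / 3600
    print(f"{rate_mb_s} MB/s -> {hours:.0f} hours")
# -> 400 MB/s: ~69 h, 500 MB/s: ~56 h, 600 MB/s: ~46 h,
#    squarely in the two-to-three-day range, before the incrementals.
```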
To pile on to the challenges for DBAs and backup admins, CIOs want to leverage public clouds like AWS, Azure, or Oracle Cloud. That might as well be the last straw for many storage architects, because if you were planning to use storage array snapshots and replication to a DR site, those proprietary replication technologies won’t work in the public cloud.
So how do you solve these problems? It’s actually not that difficult; you just need to leave your current track of thinking and join a different one. On the current track, you are reaching for tools and approaches from the past: backup products that run repeated full backups, or storage replication between arrays from the same vendor. Those approaches don’t scale to very large databases, don’t work in the public cloud, and can’t span disparate storage systems.
The new track solves the problem in a simple, refreshingly different way, one that lets you tackle database protection and recovery anywhere (in your data center or in the cloud your CIO wants), on any storage from any vendor, and even for databases beyond Oracle, SQL Server, or DB2.
Just as VMware virtualized physical servers, improving productivity and shrinking data center footprints, this approach virtualizes the data itself on any storage and any cloud, enabling incremental-forever backups and instant recoveries.
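To make “incremental-forever” and “instant recovery” a little more concrete, here is a toy sketch of the general idea; it illustrates the technique, not any particular product’s implementation. After one initial full copy, each backup captures only changed blocks, and any recovery point can be presented as a complete “virtual full” by resolving each block to its latest version at that time:

```python
# Toy illustration of the incremental-forever idea: one initial full,
# then changed blocks only, with any point in time readable as a
# complete "virtual full". A hypothetical sketch, not a vendor API.

class IncrementalForeverStore:
    def __init__(self):
        # history[t] maps block number -> block contents captured at time t
        self.history = {}

    def backup(self, t, changed_blocks):
        """Record only the blocks that changed since the last backup."""
        self.history[t] = dict(changed_blocks)

    def virtual_full(self, t):
        """Present the database as of time t without copying anything:
        each block resolves to its most recent version at or before t."""
        image = {}
        for when in sorted(self.history):
            if when > t:
                break
            image.update(self.history[when])
        return image

store = IncrementalForeverStore()
store.backup(1, {0: "A0", 1: "B0", 2: "C0"})  # initial full copy
store.backup(2, {1: "B1"})                    # only block 1 changed
store.backup(3, {2: "C1"})                    # only block 2 changed

print(store.virtual_full(2))  # {0: 'A0', 1: 'B1', 2: 'C0'}
```

Because presenting a recovery point means resolving pointers rather than copying 100 TB across the wire, recovery time stops depending on database size.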
Imagine never getting paged on a weekend again for backup failures or backup window violations. Imagine recovering a 100 TB database in minutes, not days; in less time than it takes to brew and finish a coffee.
Interested? Check out this white paper, which digs into the challenges of protecting and recovering very large databases, and how those problems can be solved with “copy data virtualization”.
White Paper – Managing Very Large Databases