Nope. It’s not about the changing US presidency. This is about a change of guard happening in the technology space. Hybrid Cloud is for real now. Real to the point that VMware is working closely with AWS for a seamless hybrid cloud experience.
There are a number of aspects that have changed in the last 8 years. The first thing that comes to mind is the speed at which applications are released. Releasing features every two weeks was just getting popular with the folks on the west coast in 2008; now, in 2016, it's common practice everywhere. The massive digitization of the last decade has increased the amount of data that needs to be processed in a given unit of time, and as a result SSD flash storage is mainstream in 2016. Back in 2008, it was just being introduced for a handful of use cases confined to metadata storage. At the time, AWS was just about the only cloud, and no enterprise was looking to leverage it. In 2016, there are three major public cloud players in AWS, Azure, and Oracle, and enterprises are looking to leverage them proactively.
The massive digitization, the blazing fast storage systems, the fast release of apps, and the prevalence of hybrid cloud in 2016 are causing problems with traditional data management products. Here are a few examples:
- DevOps teams today want to provision 10 copies of multi-TB production databases in 5 minutes. They want sensitive data masked, and data provisioning integrated via APIs into their Continuous Integration (CI) environments. Your data sub-setting and test data management tools from 2008 can't do this.
- Back in 2008, instant recovery practically did not exist. In 2016, every product claims instant recovery, yet none of them has scalable instant recovery. More about this topic here.
- Data deduplication, which was a product category back in 2008, has become a checkbox in 2016.
- Back in 2008, it was OK for IT to expect storage arrays from the same vendor to be deployed in both primary and DR sites for replication and DR. But in 2016, CIOs expect a hybrid cloud architecture, and you can't do instant recoveries of that data in public clouds like AWS, Azure, or Oracle Cloud with the tools of the past.
Some data management vendors are trying very hard to position themselves as a solution to these new challenges without any meaningful investment in their product capabilities. Let me highlight this by picking a specific vendor (I won't name them) that is privately owned, was popular in the SMB space, focused only on VMware, and grew as VMware grew, but is now finding it very hard to meet these new data management requirements with its old product architecture.
Let’s look at some of the key features that this vendor claims and understand its shortcomings for the new data management challenges.
- Instant recovery: VMs are powered on directly off a deduplicated storage pool, and people typically allocate ordinary SATA or near-line SAS storage for their backups. Since the performance of the recovered VMs will be poor, a Storage vMotion has to be performed afterward. Now think about layering all the operational processes on top of these recoveries. It's just not scalable. This blog talks about the challenges in more detail.
- Agentless backups: Most products do agentless backups of VMware. But every DBA worth their salt expects database-consistent backups and log backups, and most products recommend putting an agent in any VM that hosts a database. This product claims agentless backup despite injecting an agent at job runtime. It just stuns me that they still claim agentless. Speaking of "stun", let's talk about the VM stun issue.
- VM stun issue: VMs that host databases typically have high change rates (not always, but mostly true). I have yet to meet a customer who has not complained about VM stun while using this vendor's product in a high-change-rate environment. One specific service provider saw a 17% daily failure rate caused by VM stun. The only way to avoid VM stun is to use an agent to do native application backups, such as RMAN for Oracle or VSS for MS SQL. But that raises a different issue: how do you do incremental-forever backups for databases? We cover that next.
- Incremental forever: Every product that protects VMware VMs claims incremental forever. But what happens when the VMs have databases? The databases, and the apps using them, are the crown jewels of any IT environment, and this vendor can't do incremental-forever backups for those databases. You have to do full backups at least once a week, hitting mission-critical production databases with high storage and network IO and high CPU and memory utilization.
- Recoveries for databases: Recoveries are even more complex with this vendor. You have to mount backup images and then restore the entire dataset. The larger the database, the longer the restore time (RTO). The product does not have instant recovery for databases.
- Deduplication: It's my understanding that deduplication is done per job, that there is no global deduplication, and that they rely on the underlying OS file system's deduplication to achieve it. Their dedup happens at a block size of 1 MB to 8 MB; compare that with someone who dedups at 4 KB. You are bound to see at least 30% extra overhead in primary-site storage, DR-site storage, and bandwidth. The TCO of using this vendor, especially in a 100+ VM environment, has to be high. Or customers end up adding an external third-party dedup appliance, which again raises the TCO by involving two different vendors.
- No physical server support: Many organizations still have physical servers. In particular, mission-critical apps with databases still sit on physical clusters running Windows, Linux, AIX, or Solaris. This vendor doesn't support physical servers. They do have a beta announcement for Linux physical servers, but I don't know whether it will have incremental-forever backup and instant recovery capabilities. VMs in the public cloud have to be treated like physical servers, since you don't get access to the hypervisor, so I am not sure they can protect VMs already hosted in AWS, Azure, or Oracle cloud.
- 15 min RPO: To achieve an RPO of 15 minutes, you really need to take a snapshot every 5 minutes and hope the backup and replication to the remote site complete within 10 minutes. I can't see a VMware admin or a DBA agreeing to snap their environment every 5, or even 15, minutes. And if they are confusing DB transaction-log backups every 15 minutes with a 15-minute RPO, I find that hilarious.
- Storage snapshot integration: Managing storage snapshots from a backup product's interface might have been fine in 2008, but in 2016 it just doesn't make sense. Why? Because every CIO is asking their IT team to deliver a hybrid cloud architecture, and the key to a hybrid cloud architecture is untangling data from the underlying infrastructure and allowing that data to be used anywhere across private, public, and hybrid cloud infrastructure. As far as I know, this vendor has no integration with a storage vendor where snapshots can be transported to a public cloud for instant recoveries.
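The deduplication overhead discussed above is really just block-size arithmetic: a small edit dirties the entire block that contains it, so the coarser the dedup block, the more "unique" data you end up storing. Here is a minimal Python sketch on synthetic data (illustrative only; not a model of any vendor's actual dedup engine) comparing fixed-block dedup at 4 KB versus 1 MB on the same 64 MB of highly redundant data with a few scattered one-byte edits:

```python
import hashlib

def dedup_store_size(data: bytes, block_size: int) -> int:
    """Size of a dedup store that keeps each distinct fixed-size block once."""
    seen = {hashlib.sha256(data[i:i + block_size]).digest()
            for i in range(0, len(data), block_size)}
    return len(seen) * block_size

MB = 1024 * 1024
# 64 MB of a repeating pattern (stand-in for highly redundant backup data) ...
data = bytearray(b"\xab" * (64 * MB))
# ... with one small edit every 256 KB, each at a different intra-block offset.
for k in range(256):
    data[k * 256 * 1024 + k] = 0xCD
data = bytes(data)

print(f"4 KB blocks: {dedup_store_size(data, 4 * 1024) / MB:.1f} MB stored")  # 1.0 MB stored
print(f"1 MB blocks: {dedup_store_size(data, MB) / MB:.1f} MB stored")        # 64.0 MB stored
```

At 4 KB granularity the 256 scattered edits dirty only 256 small blocks and nearly everything dedups away; at 1 MB granularity every block ends up unique and the savings collapse. Real engines (variable-block, content-defined chunking) do better than this fixed-block toy, but the direction of the effect is the same.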
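The RPO arithmetic in the bullet above can be made explicit. As a simplified sketch (my own back-of-the-envelope model, not anything from the vendor's documentation): in the worst case a failure strikes just before the next snapshot fires, and the most recent snapshot only becomes recoverable once it has finished replicating to the DR site, so the worst-case RPO is the snapshot interval plus the backup-and-replication time:

```python
def worst_case_rpo(snapshot_interval_min: int, replicate_min: int) -> int:
    """Worst case: failure hits just before the next snapshot, and the last
    snapshot is only recoverable after replication to the DR site lands."""
    return snapshot_interval_min + replicate_min

# A claimed 15-minute RPO only holds if you snapshot every 5 minutes AND
# backup + replication finish within 10 minutes, every single cycle:
print(worst_case_rpo(5, 10))   # 15
# Snapshotting every 15 minutes already pushes the worst case past the claim:
print(worst_case_rpo(15, 10))  # 25
```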
I hope this gives you insight into why things that worked in 2008 may not work now, and definitely won't in the near future. The demands and requirements of data management have grown exponentially. Does any of the following apply to you?
- Do you have 50 to 1000 VMware VMs?
- Do you have mission critical applications using Oracle, MS SQL databases on physical servers or VMs?
- Do you plan to leverage one or more of AWS, Azure, Oracle public cloud for Vaulting, on-demand DR, or on-demand test dev for DevOps teams?
- Do you have application development teams that need to accelerate their application release cycles?
- Do you need 1-Click DR for all your VMware VMs?
If so, talk to the copy data experts at Actifio and see how our platform can help you. Please contact us at email@example.com.