
In my previous blog, I talked about how Operations and App Dev/Analytics teams struggle with the need for multiple copies of production data (the “copy data” problem). These teams turn to cloud solutions like Google Cloud Platform (GCP) to help address some of these challenges, drawn by flexibility, on-demand storage and compute, and elastic scalability. However, once they’ve adopted GCP (or any cloud vendor), they may find that not all of their challenges have been solved, and some new ones may pop up.
Let’s start with organizations that use GCP as a target for copies of production data but still maintain the production application and database in their on-premises data center. The challenges these organizations face fall mainly on the Operations side of the house:
- Deploying an on-premises backup solution and its supporting infrastructure is cumbersome. Even though the copy data lives in the cloud, on-premises resources are still needed to procure, provision, deploy, support, expand, and upgrade the backup infrastructure. The result is complexity, a heavy operational burden, and high TCO (OPEX).
- An expensive on-premises backup infrastructure also drives high TCO (CAPEX). The backup solution requires local storage, which typically means expensive dedupe appliances or high-end arrays. Additionally, most traditional backup solutions require weekly full backups, which lengthens backup windows and increases RPO.
- On the recovery side, these organizations face high RTOs. Typically, the copy data living in GCP is written in a proprietary format, deduplicated, or both. Because the copy data is not stored in its native format, recovery requires a translation or rehydration step that can significantly extend recovery times, prolonging outages and affecting business continuity.
Now, what about organizations running production workloads in GCP? A few of the challenges listed above regarding on-premises backup infrastructure are eliminated, like high TCO (both CAPEX and OPEX). But as you can imagine, some challenges remain:
- It’s very complex to protect hundreds, never mind thousands, of GCP VMs. GCP doesn’t have a global SLA engine to manage backups, and there is no policy-based management of cloud snapshots, so teams end up scripting snapshot protection themselves (see the sketch after this list).
- There is no application-aware capability for capturing mission-critical databases like MS SQL and SAP HANA. The lack of application consistency with cloud snapshots means another approach is needed. However, as mentioned above, traditional backup solutions require recurring full backups, which impact production database performance when running in GCP.
- Cloud snapshots provide no disaster recovery on their own, whether you try to recover into the same region or an alternate one. They’re just copies of the data sitting somewhere, waiting to be used. This lack of DR capabilities means that, once again, a backup solution that runs in the cloud is required.
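To make that first point concrete, here is a minimal sketch of what do-it-yourself snapshot protection looks like without a policy engine, using the Google API Python client (google-api-python-client). The project and zone values are placeholders, and a real fleet would also need scheduling, retention, pruning, pagination, and error handling, none of which is shown here.

```python
from datetime import datetime, timezone

from googleapiclient import discovery

PROJECT = "my-project"   # placeholder project ID (assumption)
ZONE = "us-central1-a"   # placeholder; real fleets span many zones and regions

# Uses Application Default Credentials for authentication.
compute = discovery.build("compute", "v1")

# List every persistent disk in the zone (pagination omitted for brevity).
disks = compute.disks().list(project=PROJECT, zone=ZONE).execute().get("items", [])

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
for disk in disks:
    # One crash-consistent snapshot per disk. Nothing here quiesces the
    # application, enforces an SLA, or expires old snapshots; that is all
    # left to you.
    compute.disks().createSnapshot(
        project=PROJECT,
        zone=ZONE,
        disk=disk["name"],
        # Snapshot names are limited to 63 RFC 1035 characters.
        body={"name": f"{disk['name']}-{stamp}"[:63]},
    ).execute()
```

Multiply this across zones, projects, applications, and retention policies, and the operational gap becomes obvious.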
Despite these challenges, fear not… all is not lost!
The next blog in our series will discuss how to solve these challenges and how Operations and App Dev/Analytics teams can really take advantage of the power of GCP. If you want more insight, check out our webinar, which discusses the optimal solution and more!
If you really want to jump ahead, check out this video about Actifio GO for GCP, a SaaS offering in GCP that helps both Operations and App Dev/Analytics teams achieve greatness!
Or, visit the Actifio GO for GCP page in the GCP Marketplace!