We’re witnessing a very large digital transformation happening across industry. Everybody wants to leverage the cloud and make their enterprise data centers run like the cloud. One of the fundamental reasons to do this is the scale at which data is growing.
Managing Very Large Databases
Customers are creating thousands of virtual machines and thousands of databases a day, a month. How do you manage this at scale? Nobody is clicking through a UI to say, “I started 800 databases”; you don’t have 800 people clicking a UI. How do you make it effective, so that when something is stood up, the entity that is stood up automatically knows how it should manage itself? In other words:
- What should my backup policy be?
- How often should I access it?
- Who should have access to it?
- Where should the data live?
- What compliance requirements apply?
- How do you govern this data?
All of these become elements that govern the lifecycle of the data that was just created.
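As a minimal sketch of this idea, the questions above can be resolved automatically from the tags on the request, so every new entity is born with a governance policy. All tag names, policy fields, and values below are illustrative assumptions, not Actifio’s actual schema:

```python
# Hypothetical mapping from request tags to governance policies.
# Tag names and policy values are illustrative only.
POLICY_TEMPLATES = {
    "compliance": {"backup_frequency": "hourly", "retention_days": 2555,
                   "data_residency": "on_premises", "masking_required": True},
    "standard":   {"backup_frequency": "daily", "retention_days": 90,
                   "data_residency": "any", "masking_required": False},
}

def resolve_policy(tags):
    """Pick a governance policy for a newly created entity from its tags."""
    tier = "compliance" if tags.get("compliance_data") else "standard"
    policy = dict(POLICY_TEMPLATES[tier])
    # Chargeback follows the department tag that came in with the request.
    policy["chargeback"] = tags.get("department", "unassigned")
    return policy

# A new database stood up with compliance data inherits the strict policy:
policy = resolve_policy({"department": "finance", "compliance_data": True})
print(policy["backup_frequency"])  # hourly
```

The point is that no human decides these answers per database; the tags carried by the service request decide them.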
Having done this at scale at many enterprises, both on premises and in the cloud, we have a very simple thought process for how an enterprise should approach this problem. Start with a typical enterprise deployment. Every enterprise has a customer portal: the place where an employee says, “I want to stand up a HANA database” or “I want a virtual machine with four cores and 18 gigabytes of memory.” That request comes in as a service request, and along with it come a number of tags:
- Which department do I belong to?
- Where does this chargeback go into?
- Is this compliance data?
- Is there security involved?
All of those become attributes that come in as part of the service request. Once a service request comes in, it typically goes into an enterprise workflow engine. That could be built on Terraform templates, on vRO, or on whatever orchestration tooling the enterprise has chosen as its standard.
And as we all know, there are approvals and audits. Once the request enters your enterprise, you have a specific set of procedures and processes to follow: “Employee X has asked for this piece of data; I’m going to give employee X access to that data.” Then, seamlessly in the backend, you need some entity to manage all of that data. That is where you call into Actifio’s REST API.
Actifio provides a single unified pane of glass across all of your disparate enterprise data segments, and it can be driven completely through our REST API. Many of our enterprise customers leverage this capability: you can customize your workflows, generate your ServiceNow tickets, and integrate with Maven and Jenkins, so anything you do, whether in the IT part of the organization or the dev part, can be integrated with Actifio. All of your approvals and audits eventually end up driving a workflow engine running inside Actifio.
Some steps of the workflow will say, “I need to back up this virtual machine.” At that point the workflow calls out: “Actifio, please back up my virtual machine.”
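A workflow step like that reduces to a single REST call. The sketch below only builds the request rather than sending it, and the endpoint path, payload fields, and token scheme are assumptions for illustration, not Actifio’s documented API:

```python
import json
from urllib import request

def build_backup_request(base_url, session_token, vm_id):
    """Construct (but do not send) a hypothetical 'back up this VM' REST call."""
    body = json.dumps({"application_id": vm_id, "policy": "snapshot"}).encode()
    return request.Request(
        url=f"{base_url}/actifio/api/backup",  # illustrative path, not a real endpoint
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {session_token}",
                 "Content-Type": "application/json"},
    )

req = build_backup_request("https://appliance.example.com", "TOKEN", "vm-800")
print(req.get_method(), req.full_url)
```

In a real deployment the workflow engine would fire this request with the session token it obtained when the service request was approved.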
Another request could be to access data: “My developers want to create 10 copies of this particular test data set so that I can run performance testing and QA testing on it.” Again, it comes in as a service request through the portal, with the same attributes we talked about:
- Who do I charge this to?
- How long can I keep this infrastructure running?
- Who controls it?
- Does it need to be masked?
All of that information is passed in, and again Actifio’s REST API comes into play: it takes all of these elements and presents the requested data in the security context that was requested.
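A sketch of how one self-service request fans out into ten per-copy provisioning specs carrying those attributes (field names and defaults here are hypothetical, not Actifio’s actual schema):

```python
def build_clone_requests(source_db, count, tags):
    """Expand one self-service request into per-copy provisioning specs.
    Field names and the 14-day default lease are illustrative assumptions."""
    return [{
        "source": source_db,
        "copy_name": f"{source_db}-qa-{i}",
        "chargeback": tags["department"],
        "expires_after_days": tags.get("lease_days", 14),
        "mask_sensitive_columns": tags.get("masked", False),
    } for i in range(1, count + 1)]

specs = build_clone_requests("orders-db", 10, {"department": "qa", "masked": True})
print(len(specs), specs[0]["copy_name"])  # 10 orders-db-qa-1
```

Each spec carries its own chargeback, lease, and masking answer, so every one of the ten copies governs itself from the moment it exists.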
This makes enterprise deployment pretty seamless, because I can have 8,000 of these entities come up, or 10,000, all coming up in disparate geographic locations. Some could be in the cloud; some could be on premises. You can have the data anywhere, and the single REST API in the Actifio engine tackles the data needs globally.
The other fundamental aspect of data governance is monitoring.
- How do I know my systems are alive?
- How do I know my Actifio appliances are alive?
- How do I know I’m responding to alerts?
- How do I run a managed service on top of this?
Enterprises have adopted a whole slew of monitoring tools. That could be ServiceNow, it could be Salesforce, or any CRM tool that integrates with the environment.
Now, how do you integrate that seamlessly with the engine as well? Again, these CRM and monitoring tools call into Actifio using the REST APIs to get job status, to get audit status, and to see whether data has been replicated, has been OnVaulted, or has been encrypted. Actifio gives you very detailed status on where your data lives at any given point in time.

Putting this all together, nobody outside of the IT and DevOps teams even knows there is an Actifio system. Your employees use the same tools they have always used, the same ticketing process, and the same workflow process, so there is no retraining of your employees.
What these enterprises have done is quietly modernize their underlying data infrastructure. They have a single virtual copy of the data that’s delivered seamlessly and instantly. They now have the flexibility to leverage the cloud, the flexibility to run analytics elsewhere, and all of the powerful capabilities the Actifio platform offers. We’ve seen this deployed at scale at large financial organizations, large retail organizations, and many healthcare organizations as well.
Ashok Ramu manages Actifio’s Cloud Business and works with some of the world’s largest enterprises to drive their data transformation initiatives and improve their data management strategies.