Jay and Ashok discuss Actifio, the SAP Certified data protection solution for SAP HANA
Jay Livens: Hi, this is Jay Livens. I’m Senior Director of Marketing here at Actifio and I am here with our GM of Cloud Solutions, Ashok Ramu. Ashok, thanks for coming.
Ashok Ramu: Hi, Jay. Absolutely, happy to be here today.
Jay: Today we’re talking about SAP HANA and Actifio. So Ashok, why don’t you start by telling us a little about how Actifio works?
Ashok: Sure thing. You know, as you can see in this picture here, Actifio started out with the concept of protecting application-consistent datasets, right. And our philosophy is you capture the data once and then you use it any number of times, and this holds true across the application space we work with. So we work with your traditional database applications like SQL Server and Oracle; we work with any flavour of Windows and Linux; and we also work with your more esoteric Unixes like AIX and HP-UX machines. And then, to the question you asked, Jay, we work very closely with SAP and integrate with all types of SAP databases.
So for instance we work with SAP on Oracle, SAP on Sybase, SAP on HANA, and many other database environments. The fundamental philosophy, as we talked about, is that Actifio captures any and all of these applications in a native, application-consistent format, and we use an application-native tool to do the capture. So we use a VSS snapshot to capture SQL Server, we use RMAN to capture Oracle, and then we use some of our patented, homegrown capabilities to capture application-consistent SAP data from the variety of underlying databases as well, which we’ll go into in detail.
Jay: That sounds really interesting. What about recovery? How do we recover the data, or bring it back if we want to access it or use it?
Ashok: When you talk about data recovery, there are two aspects to recovering any of these datasets: there is data and there is metadata. Because Actifio works application-down, we’re very aware of the application metadata as well as the data. So when we present the data back, as listed here, we can present it back into any one of these environments, right. We can go right from AWS, Google, Azure, IBM, VMware Cloud, and also back to your data center. So you could capture from one location and recover in another. And because we are application-aware, the recovery not only brings up the database; we tie the database to the database binaries and bring up your entire application stack on the recovery side.
Jay: Nice. And then how about the next level down, SAP HANA specifically? How does that work? That’s an in-memory database, right, so maybe a little different.
Ashok: SAP comes in various configurations. People can run SAP on Oracle, SAP on Sybase, and SAP on HANA. HANA, like you said, is an in-memory database. Typically, SAP environments run on Linux machines, right. And a typical Linux machine, from a data protection standpoint, will use LVM, the Logical Volume Manager, to run the database, and will usually have two partitions: one partition for data and one partition for logs. So what Actifio has done is build a way to capture data in a consistent format from SAP.
We leverage LVM snapshots fundamentally to capture the data in a consistent format. And once we have the snapshot, we have our own tracking mechanism to tell us which blocks changed between snapshots. With the combination of these two we’re able to efficiently capture data and present it in the same format we do for other applications. In fact, we are the only technology in the industry to do this today. Irrespective of your data volume size, irrespective of the type of database underneath SAP, the capture and use mechanism works the same way.
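To make the changed-block idea concrete, here is a minimal Python sketch of the general technique: checksum a volume in fixed-size blocks, then move only the blocks whose checksums differ from the previous snapshot. This is purely illustrative; the block size, functions, and in-memory "volume" are assumptions, not Actifio's actual tracking mechanism, which works at the storage layer rather than by re-hashing data.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative 64 KiB block size


def block_hashes(data: bytes) -> list:
    """Checksum every fixed-size block of a volume image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(prev_hashes, curr_data):
    """Return (index, bytes) only for blocks that differ from the previous snapshot."""
    changed = []
    for i, h in enumerate(block_hashes(curr_data)):
        if i >= len(prev_hashes) or prev_hashes[i] != h:
            changed.append((i, curr_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return changed


# First capture moves everything; later captures move only the delta.
base = bytes(BLOCK_SIZE) * 4            # a 4-block "volume" of zeros
baseline = block_hashes(base)
modified = base[:BLOCK_SIZE] + b"\x01" * BLOCK_SIZE + base[2 * BLOCK_SIZE:]
print([i for i, _ in changed_blocks(baseline, modified)])  # → [1]
```

The point of the sketch is the asymmetry: the baseline is captured once, and every subsequent capture costs only the changed blocks, regardless of total volume size.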
Jay: I know that SAP has some tools, BR tools I think is one of them, that people sometimes use to protect SAP or HANA environments. Does Actifio use BR tools?
Ashok: BR tools are the old style of backing up SAP. This is the traditional backup mode where you dump the database as a full the first time, take incrementals after that, and then recover using the old-fashioned mechanism. There are some good things about it: it is known to work, and it has worked for many years. But the bad thing is that as the data volume grows, and a lot of SAP deployments are multi-terabyte, the recovery times become really, unfathomably large.
So if you want a very low RTO and a very low RPO, that is, Recovery Time Objective and Recovery Point Objective, BR tools are not able to deliver that, which is why we’ve taken a different approach: if it’s an Oracle database, Oracle has native change block tracking, which we use. If it’s not an Oracle database, a database like Sybase or HANA, then we have our own capability to do the change block tracking, which gives you the same optimal recovery times.
Jay: Now is any of this different if we’re using a physical environment for SAP HANA as opposed to maybe a virtual environment?
Ashok: Absolutely not. Actifio works one way irrespective of where your application lives, right. As long as SAP runs on Linux, that Linux could be a physical Red Hat server, a physical Debian server, a physical SUSE server, or it could be running in Google or in Azure; it works the same way. Actifio neutralizes the infrastructure; we’re infrastructure-agnostic. We basically go application-down, and SAP is SAP no matter where you run it. That’s what makes the method very easy to use, and so customers looking at the cloud, or trying to migrate data out of the cloud, are able to leverage this technology to protect data the same way on the source and recover and use it the same way on the target.
Jay: When I do recovery and I bring it back, how do I bring that SAP environment back into production? Are there any challenges that I might face trying to restore SAP HANA and then bring it back into production?
Ashok: When you look at recovering SAP, there are three aspects you need to consider, right. First, SAP is a set of binaries, so you need to bring up the binaries. You need to bring up the operating system that’s housing the binaries, and you might have custom patches in the operating system that you always want to keep up to date. And finally you need to bring up the database and the database binaries, right. These three pieces have to be captured distinctly, and the person or application capturing them has to be aware that they must be stitched together for the system to come back up.
So what Actifio does is an efficient capture of the operating system. We capture system state, so we can capture the boot OS and all of the other elements that make up the Linux machine. We stand that machine up in any of these environments. Once that machine is up, we present the SAP and database binaries to it logically, so now this machine looks like the shell that was running SAP on your source. Finally, we present the SAP data and the associated logs to that shell, and now your environment is up and running.
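The stitching order described above matters: binaries can only be presented to a running OS, and data and logs only to a machine that already has the binaries. A small Python sketch of that dependency chain, with hypothetical step names that are not Actifio terminology:

```python
# Hypothetical step names; the real orchestration is internal to Actifio.
# Each recovery layer depends on the one before it being in place.
RECOVERY_STEPS = [
    ("stand_up_os",       None),               # boot image, custom patches intact
    ("present_binaries",  "stand_up_os"),      # SAP + database binaries, mounted logically
    ("present_data_logs", "present_binaries"), # database data and transaction logs last
]


def recover(target: str) -> list:
    """Run the recovery steps strictly in order, verifying each prerequisite."""
    done = set()
    plan = []
    for step, requires in RECOVERY_STEPS:
        if requires is not None and requires not in done:
            raise RuntimeError(f"{step} needs {requires} first")
        plan.append(f"{target}: {step}")
        done.add(step)
    return plan


for line in recover("hdb-dr"):
    print(line)
```

Running out of order fails fast, which is the property a recovery orchestrator needs: a half-stitched system should never be reported as recovered.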
Jay: I hear things like boot volumes and stuff like that; it makes me think of bare metal. Can you do bare metal recoveries with this kind of approach?
Ashok: Yes, we can capture the bare metal components. We can capture the boot volume, the OS partitions, and all the other data partitions inside that machine. And when we bring it up in a cloud environment like AWS or Google, we’re able to convert those boot volumes to the format appropriate to the hypervisor. So the machine is stood up, and then the data is presented to it.
Jay: What about transaction logs, what about the ability to use those and protect them and perhaps use them to roll backward or forward as needed?
Ashok: So you know, like I said, we capture two components: the data and the logs, right. The transaction logs are what we capture at a much finer interval. Let’s say you want a 15-minute recovery point objective, which means your data cannot be stale by more than 15 minutes. What we typically do is capture the data itself once every six hours, say, while the transaction logs are captured every 15 minutes. Because Actifio is aware of both the database data and the logs, we’re able to marry the two on the recovery side and get you a recovery point within 15 minutes. That’s how we achieve a really low RPO with SAP.
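The arithmetic behind that claim is worth spelling out: with log replay, the recovery point is bounded by the log interval, not the much larger database interval. A small Python sketch of that calculation (the intervals and timestamps are just example numbers):

```python
from datetime import datetime, timedelta

DB_INTERVAL = timedelta(hours=6)      # database captures (example schedule)
LOG_INTERVAL = timedelta(minutes=15)  # transaction log captures


def worst_case_staleness(db_interval: timedelta, log_interval: timedelta) -> timedelta:
    """Recovery rolls forward from the last database capture using logs,
    so the worst-case data loss is one log interval."""
    return min(db_interval, log_interval)


def recovery_point(failure_time: datetime, last_db_capture: datetime,
                   log_interval: timedelta) -> datetime:
    """Latest log capture at or before the failure, assuming logs are
    taken every log_interval starting at the database capture."""
    whole_intervals = (failure_time - last_db_capture) // log_interval
    return last_db_capture + whole_intervals * log_interval


last_db = datetime(2019, 6, 1, 0, 0)
failure = datetime(2019, 6, 1, 3, 40)   # 3h40m after the last DB capture
print(recovery_point(failure, last_db, LOG_INTERVAL))  # 03:30, at most 15 min old
```

Even though the database itself was last captured hours earlier, replaying the 03:30 logs on top of it recovers to within 10 minutes of the failure.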
Jay: What happens if I experience some corruption in my SAP database and I want to bring back maybe certain tables or fields, not the entire database? Can I do that too?
Ashok: As part of the recovery operation, which is the “use” component of the Actifio story, we have built in what’s called a data transformation piece. Tied into it is the ability for the end user to dictate what the data looks like downstream. So you can capture a production database, and this is a very common occurrence. You mentioned corruption, but you know, the most common case is sensitive data. There are Social Security numbers and credit card information in there; how do I present that to my downstream environments?
We have a workflow built into Actifio where you can take production data, strip out the sensitive pieces, mask them, and then present the masked copy downstream so that the data has been sanitized for developers [Indiscernible] [00:09:41]. The same concept can be applied if you want to do subsetting. Let’s say an Oracle database has, you know, 100 tablespaces and you just want to present two of them to the downstream environments; you can absolutely bring up the Oracle database, subset it, and present it downstream. So the transformation engine is powerful. It gives you extensive scripting capabilities, because you know your data, right. Actifio will make sure the scripts are called at the appropriate time and will orchestrate the whole thing on a schedule.
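As a flavour of what such a customer-supplied masking script might do, here is a minimal Python sketch that scrubs Social Security and card numbers from records before a copy is handed downstream. The field names and patterns are illustrative assumptions, not a real Actifio schema or workflow API.

```python
import re

# Illustrative patterns for the two sensitive fields mentioned above.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")


def mask_row(row: dict) -> dict:
    """Replace sensitive values in one record before the masked copy
    is presented to a downstream (dev/test) environment."""
    masked = dict(row)
    for key, value in masked.items():
        if isinstance(value, str):
            value = SSN_RE.sub("XXX-XX-XXXX", value)
            value = CARD_RE.sub("****-****-****-****", value)
            masked[key] = value
    return masked


row = {"name": "Pat", "ssn": "123-45-6789", "note": "card 4111 1111 1111 1111"}
print(mask_row(row))
```

In the workflow described above, a script like this runs once against the captured copy; every downstream environment then sees only the sanitized data.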
Jay: And I imagine that means calling the right scripts and doing the right quiescing. So if I’m an administrator in an SAP HANA environment and I’m used to using the GUI inside HANA, and I want to make sure that my backups are complete, are the backups that Actifio makes reported or somehow visible? Is HANA aware of them inside the GUI?
Ashok: Yes, HANA is aware of the HANA snapshots that we take. Specifically, the SAP HANA UI will show the HANA snapshots that Actifio has triggered, so you can track in the UI that the backups happened at a given point in time.
Jay: As I’m doing these various backups, I’m thinking about the importance of HANA performance, because it’s a very critical application. So as I’m running these backups, possibly as frequently as every 15 minutes, what does the overhead look like for my SAP HANA server, for my compute, for my network, for everything in my environment? Is it going to be a big impact on my production?
Ashok: I think that’s a great question. We’ve architected a solution wherein, for any type of application including SAP HANA, the impact to production is always minimal with Actifio. And we ensure this with two things, right. We are an incremental-forever product, which means you only move the entire dataset once for the lifecycle of the application; and we’re much more efficient in capturing changes. We have filter drivers dropped in for SQL Server as well as Linux environments, which track which blocks in the volume changed.
So we take a snapshot and we only move the changed blocks, right. And because we move logs so often, and we’re able to marry the logs with the database, you can schedule your database backups as far apart as you want and still have a 15-minute log interval. So overall, depending on the transactional volume of your database and how many changes occur, you can schedule a database backup maybe just once a day and play out the logs every 15 minutes. We let you manage your data as often as you want, because you know the application best, and Actifio can tune and cater to that very easily.
Jay: I imagine the data backups might be larger than the logs, though that may not be fair. But when you’re running one of those copy operations, it’s going over the network to the Actifio engine, as illustrated here. What happens if the network goes down? What is going to happen to our backup job?
Ashok: There is enough resiliency built in between the database layer and the Actifio engine; there are retries, and protocol handshakes happen. In the event of a catastrophic network failure, the job will be retried after a given period of time. But because we are tracking changes the whole time, every time you retry you’re only moving what changed, right. So there is no additional data movement, and the overhead is minimal. Network retry is pretty common these days, most products do it, but Actifio is optimized so that for this application, this is the smallest amount of data you can ever move to get to a consistent state.
Jay: Let me switch gears a little bit. A few weeks ago we had a client in here who was a big SAP shop. I was asking him what he was using SAP for and about the number of copies, and he started reeling off: well, I have one for dev, I have one for training, I have one for user acceptance testing, I have one for other testing. And all of a sudden that one database had like 10 copies. So thinking in that context, how does Actifio help? And let me add one more element to make it more complex for you: when you’re giving the database to the different users, you might want the database to look different, because they may not be allowed access to Social Security numbers or other personal information that could be very private. So how can Actifio help with that?
Ashok: Great question. We touched on the data masking capabilities and the workflow we tie into. That’s one mechanism whereby Actifio can take the production dataset and, by applying the data masking scripts that you give us to mask, erase, and remove all of the sensitive information, present that copy to the downstream environment. Now, we can present that same data to multiple downstream environments. So you can capture the SAP database here and then turn around and present it to multiple cloud platforms, right.
Actifio can then present that data to a compute instance in AWS, in Google, in Azure, and in IBM. Those are four different, independent, read-writable copies, and each end user sitting on Amazon, Google, Azure, or IBM can snapshot and manage the lifecycle of their copy on their own. So this gives you control, and you can also orchestrate how often you want these copies refreshed.
Jay: Say I’m the developer or the training guru; what is the process for me to get a new one of these? Do I have to call somebody and wait for weeks while the admins go through their traditional processes to do the masking or whatever is needed? What does that look like for me?
Ashok: Absolutely not, right. We’ve built it so that this entire process can be end-user driven. The operational control can be held by the ops team, which says: okay, Jay has access to the data on AWS, Jay has access to the data on IBM, et cetera. Based on the access rules set, the end user can go in and provision their own copy of the database. You incur no additional storage when the copy is provisioned; the only time you incur storage charges is when you start writing to that database and making changes.
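That "no storage until you write" behaviour is the classic copy-on-write pattern. Here is a minimal Python model of the idea, an illustrative sketch only and not Actifio's on-disk format: every provisioned copy shares the base image, and a copy consumes storage only for the blocks it rewrites.

```python
class VirtualCopy:
    """Copy-on-write view of a shared base image: provisioning costs no
    extra storage; only blocks this copy rewrites are stored separately."""

    def __init__(self, base):
        self.base = base      # shared, read-only blocks
        self.overlay = {}     # blocks this copy has rewritten

    def read(self, i):
        return self.overlay.get(i, self.base[i])

    def write(self, i, block):
        self.overlay[i] = block   # storage is consumed only now

    def extra_storage_blocks(self):
        return len(self.overlay)


base = ["b0", "b1", "b2"]
dev = VirtualCopy(base)   # provisioned for a developer: 0 extra blocks
qa = VirtualCopy(base)    # another user, still 0 extra blocks
dev.write(1, "b1'")       # only now does dev consume one block
print(dev.read(1), qa.read(1), dev.extra_storage_blocks())  # → b1' b1 1
```

Ten such copies of a multi-terabyte database cost essentially nothing at provisioning time; each user pays only for their own divergence.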
And the one last piece I wanted to bring up: a question that comes up from a lot of customers is, how do you get an application-consistent snapshot, right? Because that requires not only knowledge of the storage, at the LVM level; it also requires you to tie into the database layer and the SAP layer. Actifio has tie-ins with all of these layers. So when you take a backup, we’re able to orchestrate across the different layers: put the database in a consistent state, take a snapshot, and then get the database going again. All of this happens in seconds, so your production is not impacted at all. That’s the mechanism we’ve built to capture HANA efficiently and also recover it in any environment.
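The quiesce-snapshot-resume sequence can be sketched in a few lines of Python. This is a toy model of the orchestration pattern, not Actifio's implementation: the key property is that the database is held in a consistent state only for the instant the snapshot is triggered, and is resumed even if the snapshot fails.

```python
import contextlib
import time


@contextlib.contextmanager
def quiesced(db):
    """Hold the database in a consistent state only as long as it takes
    to trigger the snapshot, then let it run again, snapshot or not."""
    db["quiesced"] = True         # stand-in for the real database quiesce call
    try:
        yield
    finally:
        db["quiesced"] = False    # always resume, even on snapshot failure


def capture(db, take_snapshot):
    """Quiesce, trigger a (near-instant, metadata-only) snapshot, resume."""
    start = time.monotonic()
    with quiesced(db):
        snap = take_snapshot()    # e.g. an LVM or HANA snapshot
    return snap, time.monotonic() - start


db = {"name": "HDB", "quiesced": False}
snap, paused = capture(db, take_snapshot=lambda: "snap-0001")
print(snap, db["quiesced"])  # snapshot taken, database already resumed
```

Because the snapshot itself is metadata-only, the pause is measured in fractions of a second; the heavy lifting of moving changed blocks happens afterwards, against the snapshot, with production already running again.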
Jay: Great. Well Ashok, thank you.
Jay: Everyone, thanks for attending this brief video about how Actifio supports and protects SAP HANA. If you’re looking for more information, please visit our website at actifio.com or follow us on Twitter at @Actifio. Thanks again.
Ashok: Thank you.
Learn more about Actifio for SAP HANA