Actifio is now part of Google Cloud.

Backup & Recovery for SAP HANA.


Enable powerful enterprise-grade multi-cloud backup and disaster recovery for mission-critical SAP HANA environments.

Enterprises use Actifio software for rapid backup and instant recovery of mission-critical SAP HANA in-memory databases, anywhere – on-premises and in the public cloud.


Enterprises using legacy backup architectures commonly face long backup windows, heavy performance impact on production databases, and recovery times measured in days.

How Actifio Helps.

Actifio reduces the performance impact and backup window by up to 20x by leveraging application consistent incremental forever backup technology.


It reduces RPO to 1 hour and reduces RTO from days to minutes with its unique capability to instantly recover multi-TB SAP HANA databases.


Actifio’s ability to store backups in native SAP HANA format on any storage also delivers high-performance post recovery and cloning.


Best of all, enterprises can use Actifio to leverage any private/public/hybrid cloud.



Actifio helps organizations outperform their peers by becoming truly data-driven.

Frequently Asked Questions.

Does Actifio require SAP certification?

Since Actifio uses the native SAP HANA snapshot API, certification is not required.

Which SAP HANA configurations does Actifio support?

SAP HANA can be deployed in the following supported configurations:

  1. Single host configuration (scale-up)
    1. One instance and single container – supported by Actifio
    2. Multiple instances and single container – supported by Actifio
  2. Multi-node cluster configuration (scale-out)
    1. Multiple hosts, one instance and single container – supported by Actifio
    2. Multiple instances – supported by Actifio (via filesystem backup)
    3. Multiple containers – supported by Actifio (via filesystem backup)
Supported versions and prerequisites:

  1. SAP HANA database version: SAP HANA SPS 10, SAP HANA SPS 11, SAP HANA SPS 12
  2. SAP-supported RHEL/SLES versions (Intel-based hardware platforms)
    1. RHEL 7.2 (SAP Notes: 2013638, 2136965, 2247020, 2292690)
    2. SLES 11, SLES 12 (SAP Notes: 1824819, 1954788, 2240716, 2205917)
    3. SUSE 10, 11, 12 (SP2, SP3)
  3. Actifio Linux Change Block Tracking (CBT) technology requires Linux LVM

How is Actifio priced?

Actifio’s pricing model is simple: it is based on the amount of source data protected. For example, if an enterprise wants to use Actifio to protect 5 SAP HANA instances, each with a 10 TB source database, it would need an Actifio license for 50 TB of source data. Enterprises can use this license anywhere in private, public, or hybrid cloud.
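The licensing arithmetic above can be sketched in a few lines. This is an illustrative model of the pricing description (the function name is hypothetical, not an Actifio API); the numbers mirror the example in the text.

```python
# Hypothetical sketch of the source-data licensing model described above:
# license size = sum of protected source database sizes, regardless of
# how many backup copies are kept or where they live.

def required_license_tb(source_db_sizes_tb):
    """Return the license size in TB of protected source data."""
    return sum(source_db_sizes_tb)

# Five SAP HANA instances, 10 TB each, as in the example:
instances = [10] * 5
print(required_license_tb(instances))  # 50
```

Note that retention and number of backup copies do not enter the calculation; only the protected source footprint does.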

Does Actifio support the public cloud?

Yes. Actifio supports all the major public cloud providers, including AWS, Azure, Google, IBM, and Oracle. It’s also available in the leading public cloud marketplaces.

How does Actifio compare with legacy backup vendors?

Following are the key comparisons:

  1. Full vs incremental-forever backup for SAP HANA databases: Legacy vendors perform recurring full backups, while Actifio performs application-consistent incremental-forever backups using native SAP HANA snapshot APIs.
  2. Long vs instant recovery time: RTO with legacy vendors is long because they must restore from a proprietary backup format. Compare this with Actifio’s instant recovery, which can recover even a 100 TB SAP HANA database in just minutes.

How does Actifio compare with deduplication appliances?

Following are the key comparisons:

  1. Low vs high performance post instant recovery: While mounting backups from deduplication appliances might be instant, the deduplication format on commodity storage significantly degrades IO performance. Users simply can’t run mission-critical SAP HANA databases from such appliances. Compare this with Actifio, which stores backups in native SAP HANA format on any storage tier, delivering the raw performance of the underlying storage.
  2. None vs high flexibility for storage and storage protocols: With deduplication appliance vendors, users are locked into whatever storage the vendor supplies. Moreover, the only storage protocol typically supported is NFS. Compare this with Actifio, which lets users choose any storage vendor with the right performance characteristics, eliminating storage vendor lock-in. Users also have complete flexibility to back up, recover, and mount over Fibre Channel, iSCSI, and NFS.
  3. High vs low cloud TCO: Scale-out deduplication appliance vendors support very few public cloud providers. They also need a large amount of compute to protect cloud-resident SAP HANA workloads, with a minimum of 3-4 node clusters, driving up cloud costs. Compare that with Actifio, which needs very little compute to protect SAP HANA workloads, leverages cloud block and/or object storage, and even delivers instant recovery from cloud object storage.

Does Actifio support incremental backups for SAP HANA?

Yes. Actifio leverages SAP HANA’s native Block Change Tracking (BCT) technology to capture just the changed blocks in its incremental-forever backups. After the first backup, which is an image copy (hdbsql BACKUP DATA CREATE SNAPSHOT), Actifio performs incremental backups and incremental merges.
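The incremental-forever flow can be pictured as a block-map merge: one full image copy, then each subsequent backup transfers only changed blocks and folds them into a synthetic full. The sketch below is a simplified conceptual model, not Actifio’s actual implementation.

```python
# Simplified model of incremental-forever backup: after the initial full
# image copy, each backup captures only the blocks that changed (as
# Block Change Tracking would report them) and merges them into the
# synthetic full, so a recurring full backup is never needed again.
# Illustration only -- not Actifio's implementation.

def first_full_backup(source_blocks):
    """Initial image copy: every block is captured."""
    return dict(source_blocks)

def incremental_merge(synthetic_full, changed_blocks):
    """Merge only the changed blocks into the synthetic full copy."""
    merged = dict(synthetic_full)
    merged.update(changed_blocks)
    return merged

full = first_full_backup({0: "a", 1: "b", 2: "c"})
# Next backup: only block 1 changed, so only one block is transferred.
full = incremental_merge(full, {1: "b2"})
print(full)  # {0: 'a', 1: 'b2', 2: 'c'}
```

The key property is that each merge yields a complete, current full copy while transferring only the changed data.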

Can transaction logs be backed up between incremental backups?

Yes. The backup admin or DBA can set up SLAs such that transaction logs are backed up every X minutes or hours between the incremental backups. For example, a user can set up incremental backups every 4 hours and log backups every 15 minutes.

How does point-in-time recovery work?

During recoveries, a user can specify any point-in-time. Actifio automatically identifies the nearest incremental point-in-time, instantly mounts a synthetic virtual full copy as of that point, and applies the archive logs to recover the SAP HANA database to the specified point-in-time. All of this is fully automated.
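The recovery-planning step described above — pick the newest incremental at or before the target time, then the archive logs needed to roll forward to the exact target — can be sketched as follows. The function and data here are illustrative assumptions, not Actifio internals.

```python
# Conceptual sketch of point-in-time recovery selection: choose the most
# recent incremental backup at or before the target, then the archived
# log backups needed to roll forward to the exact target time.
from datetime import datetime

def plan_recovery(incrementals, logs, target):
    """incrementals/logs: lists of backup timestamps; returns the base
    incremental to mount and the logs to replay, in order."""
    base = max(t for t in incrementals if t <= target)
    replay = sorted(t for t in logs if base < t <= target)
    return base, replay

incrementals = [datetime(2021, 1, 1, h) for h in (0, 4, 8)]
logs = [datetime(2021, 1, 1, 4, m) for m in (15, 30, 45)]

base, replay = plan_recovery(incrementals, logs, datetime(2021, 1, 1, 4, 40))
print(base)    # 2021-01-01 04:00:00
print(replay)  # the 04:15 and 04:30 log backups
```

The automation the text describes wraps exactly this selection: mount the synthetic full for `base`, then apply the `replay` logs in order.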

What is the performance after an instant recovery?

Actifio stores SAP HANA backups in native format. This ensures that, after instant mount and recovery, there is no performance overhead from format conversion.

The other factor to consider is the storage on which the SAP HANA backups are stored by Actifio.

Depending on the performance requirements after instant recovery or provisioning database clones to test/dev users, enterprises can specify the right storage tier to use with Actifio.

Lastly, Actifio also offers the flexibility to instantly mount and recover over Fibre Channel, iSCSI, or NFS, depending on user preference.

Thus performance can be as good as the underlying storage and the chosen protocol allow.