In Part 2 of this blog series, we discussed what’s required to provide incremental-forever backup for large databases and how Actifio does it, specifically for Oracle. In Part 3, you’ll learn how Actifio provides incremental-forever backup for Microsoft SQL Server and SAP HANA.
As a reminder from Part 2, capturing backups incrementally forever requires three things:
- A way to get the database to a consistent state on disk so that a snapshot will have data integrity and consistency.
- A way to take a snapshot of volumes to serve as a temporary, stable source from which to retrieve data – initially a full copy, and then only the changed blocks thereafter.
- A way to track changed blocks between two points in time.
Microsoft SQL Server does not have the same built-in mechanism for changed block tracking that Oracle does. In fact, no other database has that. So the challenge in doing incremental-forever, block-level backup for SQL Server is a bit greater. Microsoft does give us some help by providing a standard way to get that stable copy of the disk from which to copy the changed blocks: the Windows Volume Shadow Copy Service (VSS), which creates an application-consistent snapshot of a Windows volume. That gives us half of what we need. But we still need to know which blocks have changed.
To get the changed blocks, Actifio leveraged Microsoft's framework for building filter drivers – software that can tap into the I/O stack. We look at writes directed to any of the SQL Server files and mark the blocks of those files that have changed. Thus, we maintain a bitmap for these files, where a bit is set whenever the corresponding block is changed. These bitmaps are very small, reside in memory, and are efficient to update, so the whole operation is fast, with negligible impact on the host. When backup time arrives, we use VSS to create a snapshot of the disk and then use our bitmap to copy only the file blocks that have changed. In clustered SQL Server environments, this requires a bit more work: we maintain these bitmaps on all the hosts, because a database can move among hosts between backups. In that case, the blocks that need to be copied are the union of the blocks that changed on every host where the database happened to run. All of this is handled automatically for the user.
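To make the bookkeeping concrete, here is a minimal Python sketch of the idea – one bit per fixed-size block, set on write, OR-ed across hosts before a backup. The block size, class name, and in-memory representation are illustrative assumptions, not Actifio's actual driver internals (the real driver is kernel-mode filter-driver code, not Python):

```python
# Sketch of changed-block tracking with a per-file bitmap.
# BLOCK_SIZE and all names here are illustrative assumptions.

BLOCK_SIZE = 64 * 1024  # assumed tracking granularity: 64 KiB blocks

class ChangeBitmap:
    """One bit per block; a set bit means the block changed since the last backup."""

    def __init__(self, file_size: int):
        self.num_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
        self.bits = 0  # Python int used as an arbitrary-length bit field

    def record_write(self, offset: int, length: int) -> None:
        """Mark every block touched by a write to [offset, offset + length)."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for block in range(first, last + 1):
            self.bits |= 1 << block

    def changed_blocks(self) -> list[int]:
        """Return the indices of blocks that must be copied at backup time."""
        return [b for b in range(self.num_blocks) if self.bits >> b & 1]

    def merge(self, other: "ChangeBitmap") -> None:
        """OR in another host's bitmap (the clustered SQL Server case)."""
        self.bits |= other.bits

    def clear(self) -> None:
        """Reset after a successful backup; tracking starts fresh."""
        self.bits = 0
```

A write that straddles a block boundary sets both bits, and merging the bitmaps from every cluster node yields exactly the union of changed blocks described above.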
SAP HANA is an in-memory database, which raises the question: how do you get an in-memory database to an application-consistent state where you can snapshot its disk image and copy changed blocks? Luckily, SAP realized this would be needed for effective backup and built a framework for storage snapshots into HANA, which includes all the data required to recover the database to a consistent state. SAP HANA runs on Linux, which has no exact equivalent of Windows VSS but is nearly always used with the Logical Volume Manager (LVM). We leverage the snapshot capabilities of LVM to get a temporary, stable source for copying the needed data, so between SAP and LVM we have covered two of the three requirements for incremental-forever backup.
The third requirement is, of course, the ability to track changed blocks. Again, there is no easy, universal way to do this, so we developed our own changed-block-tracking driver for Linux. It works a bit differently from the Windows driver: it tracks changes at the volume level rather than the file level, but the principle is similar. When it's time for a backup, we call SAP HANA to prepare for a storage snapshot, call LVM to create the underlying volume snapshot, and use the bitmap to copy only the changed blocks. Once the changed data has been copied and the backup completes, we remove the LVM snapshot; it is no longer needed, since the bitmap will tell us which blocks to copy next time.
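The per-backup cycle described above – take a stable snapshot, copy only the blocks the bitmap flags, then discard the snapshot – can be sketched as a tiny simulation. Here an in-memory copy stands in for the VSS/LVM snapshot, and the function name, block size, and data are illustrative assumptions:

```python
# Simulated incremental-forever cycle: "snapshot" is a stable in-memory
# copy standing in for a VSS/LVM snapshot, and the backup image is
# patched with only the blocks the tracking bitmap flagged as changed.

BLOCK_SIZE = 4  # tiny blocks so the example is easy to follow

def incremental_backup(snapshot: bytes, backup_image: bytearray,
                       changed_blocks: list[int]) -> int:
    """Copy only the changed blocks from the snapshot into the backup image.

    Returns the number of bytes copied, to show how little data moves
    compared with a full backup.
    """
    copied = 0
    for block in changed_blocks:
        start = block * BLOCK_SIZE
        end = min(start + BLOCK_SIZE, len(snapshot))
        backup_image[start:end] = snapshot[start:end]
        copied += end - start
    return copied

# Initial full copy (the one-time "full" in incremental-forever).
database = bytearray(b"AAAABBBBCCCCDDDD")
backup = bytearray(database)

# The database changes; the tracking driver would set bits for blocks 1 and 3.
database[4:8] = b"XXXX"    # block 1
database[12:16] = b"YYYY"  # block 3

snapshot = bytes(database)  # stable source, standing in for the LVM snapshot
moved = incremental_backup(snapshot, backup, changed_blocks=[1, 3])

assert backup == database  # the backup is now current
assert moved == 8          # only 8 of 16 bytes were copied
```

Each cycle moves only the changed data, which is why backup time stays proportional to the change rate rather than to the database size.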
The nice thing is that this process can be applied to any database that can be put into a consistent state for a snapshot, and that's how we support IBM Db2, MySQL, MariaDB, PostgreSQL, SAP ASE and IQ, MongoDB, and other databases – all with incremental-forever backups and near-instant access to any point-in-time backup.
Our customers appreciate it, too: they can back up a 100 TB database every day in 2-3 hours, not to mention recover in minutes instead of days or weeks. We did a lot of heavy lifting over the last 10 years, but all that hard work definitely pays off. So, regardless of the database of your choice, Actifio can support incremental-forever backup, whether the database is small, medium, large, or very large (100+ TB) in size!