
Comparing Two SQL Server Restore Methods (2 of 3)

Welcome to Part Two in our series on SQL Server data protection: copy data management. Part One was all about SQL Server backup methods. Once you have the data, how do you use SQL copy data, and what do you use it for? Starting out, the number one use case is recoveries. There are two methods of doing those recoveries: one is what we call the traditional restore, and the second is the instant mount. So let's compare and contrast the two, understand what they really do, and when you might want to use one versus the other.

 

To start, you have the SQL Server; it has some databases, some production storage, and some data volumes. You also have some backup software, and that backup software has some storage. On the backup storage, you have your copies of your database. You may have many point-in-time copies, as well as transaction logs. So when you want to use this data to do a restore, what do you have to do? Well, the backups here tend to be fulls, sometimes differentials, and then of course the transaction logs. To use them, you initiate a restore. The restore says: take my backup image and copy it back to the server, which results in it getting stored back on the production storage. That process is going to take however long it takes, based on your infrastructure, the performance of your storage array, and of course the amount of data you're moving.
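That restore sequence can be sketched in T-SQL; this is a minimal illustration, and the database name, backup file paths, and drive letters are hypothetical:

```sql
-- Restore the most recent full backup, leaving the database
-- in a restoring state so a differential or logs can follow.
RESTORE DATABASE SalesDB
    FROM DISK = N'E:\Backups\SalesDB_Full.bak'
    WITH NORECOVERY, STATS = 10;  -- report progress every 10%

-- Optionally apply the latest differential on top of the full.
RESTORE DATABASE SalesDB
    FROM DISK = N'E:\Backups\SalesDB_Diff.bak'
    WITH NORECOVERY;
```

In practice your backup application generates an equivalent sequence for you; the point is that every byte in those backup files gets copied back to production storage.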

Once that process is done, you have a copy of your database on your production storage. That database can be brought up and online. It may be left in a restoring state if you're then going to use transaction logs. If you do, the backup application typically sends those logs directly to the database, rolling it forward to further and further points in time, until it's eventually brought online and you're finished. Those logs don't typically get stored on disk as transaction logs; they get rolled into blocks in your restored data file. So the benefit of the traditional restore approach is that when you're done, you're done: your data is right where you want it, on your production storage, running on your server, and you can use it for anything you want.
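The roll-forward step looks something like the following sketch; the log file names and the stop time are hypothetical. Each log is applied WITH NORECOVERY until the last one, which brings the database online:

```sql
-- Apply transaction logs in sequence, rolling the database forward.
RESTORE LOG SalesDB
    FROM DISK = N'E:\Backups\SalesDB_Log1.trn'
    WITH NORECOVERY;

-- The final log can stop at a specific point in time,
-- then bring the database online for use.
RESTORE LOG SalesDB
    FROM DISK = N'E:\Backups\SalesDB_Log2.trn'
    WITH STOPAT = N'2024-06-01T09:45:00', RECOVERY;
```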

The downside is lots of data movement, which requires a lot of infrastructure and a lot of time. A very simple, very straightforward tradeoff. If you think about how you use this data for a restore, there are really two types of restores. There's the 99% case: I need to restore somewhere other than into production, maybe a production server with a new name, maybe a different server, because I need to find some data that was accidentally deleted or changed, or I need to prove something was there at a previous point in time.
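Restoring under a new name alongside production means relocating the files so they don't collide with the live database; a hedged sketch, with names and paths hypothetical:

```sql
-- Restore a copy next to production under a new name,
-- moving the data and log files to non-conflicting paths.
-- (Use RESTORE FILELISTONLY against the backup first to
-- discover the logical file names.)
RESTORE DATABASE SalesDB_Recovered
    FROM DISK = N'E:\Backups\SalesDB_Full.bak'
    WITH MOVE N'SalesDB_Data' TO N'F:\Restore\SalesDB_Recovered.mdf',
         MOVE N'SalesDB_Log'  TO N'F:\Restore\SalesDB_Recovered.ldf',
         RECOVERY;
```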

The 1% is the scenario where, "Oh no, I'm really down, my database has been corrupted. I need to roll production back to a previous point in time." In that 1% scenario, the fact that this process is long and slow can be devastating for large databases. In the 99% scenario, it's only mildly annoying, or maybe very annoying, depending on the size of your database. But you copy that data back. And one of the things that can make it a little more annoying is that you may not have enough space for a large copy of your database where you want to restore it.

So you might actually have to provision more storage, which means you now have a storage administrator involved, and maybe a server administrator, to get it all set up and talking to each other, just so your restore can land the data there, you can run some queries, and then ultimately delete it all once you've retrieved the data you want.

The alternative to all of this is the instant mount. The instant mount is a totally different approach to using SQL Server data from a backup, because it's doing just that: using the SQL Server data from the backup. What that means is that the backup server presents the storage with the database on it over to the SQL Server, and the database server brings that online directly over the connection from the backup server. Now, the performance of that connection is usually the first thing people ask about. It's pretty obvious, I think, that if you're presenting this directly, your recovery times are directly impacted and dramatically reduced. The time it took to copy a 5 terabyte database disappears. You're now just bringing a 5 terabyte database online from the backup server.
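Vendors implement instant mount in different ways, but once the backup server has presented a volume containing the database files, the mount amounts to an attach; a minimal sketch, with the drive letter and file names hypothetical:

```sql
-- Attach the database directly from the volume presented by
-- the backup server -- no data is copied to production storage.
CREATE DATABASE SalesDB_Mounted
    ON (FILENAME = N'X:\MountedBackup\SalesDB.mdf'),
       (FILENAME = N'X:\MountedBackup\SalesDB_log.ldf')
    FOR ATTACH;
```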

The attention shifts to performance, and that's when you realize what kind of disk you used for your backup target. You probably used the cheapest, slowest disk you could find because, well, you wanted to save as much money as you could on your backups; everybody does. So now the performance is driven off of that disk. And not just that disk; it's also driven off of the connection from your backup server to your SQL Server, which may or may not be as high-speed a connection as your storage connections are. Maybe those are Fibre Channel and this is going over the network; it varies from case to case. You may also find that the storage here is not just cheap, slow disk, but cheap, slow disk with deduplication running on top of it. So now you have another layer that reduced your storage footprint to save cost, but at the expense of random read performance, which is exactly what SQL Server needs.

Now, don't get me wrong. There are some tier 1 storage arrays with built-in deduplication that perform fantastically. Those aren't the arrays you're usually using as a backup target; I'm talking about your cheap-and-deep deduplicated backup storage. So you have not just slow disk, but dedupe on top of that, and then on top of that you have the backup format itself. Backup software typically stores data in a backup format, and sometimes it's a two-layer backup format: the Microsoft SQL Server backup format, wrapped in another format from the backup application itself. So you end up with a translation process that has to take place between the actual IO requests from the server and where they're being retrieved from: a backup format, on deduplicated disk, on a slow physical disk layout. Your performance can suffer.
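You can see that layered overhead directly in SQL Server's IO statistics. This query against the real sys.dm_io_virtual_file_stats DMV shows average read stall per read by file; on a mounted database, high numbers point at the backup disk, the connection, or the format translation:

```sql
-- Average read latency per database file, slowest first.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC;
```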

Now, for a large database, which is going to be better? If you just need to run some queries and retrieve a few pieces of data, absolutely do the instant mount. Get the data from there and save yourself the hassle of creating a whole extra set of disks of many terabytes, connecting it all up, and involving all the people who need to be involved. On the other hand, for your recovery after a true disaster, where you need to roll back to a previous point in time, a mount from something that's not going to perform well isn't really a lower RTO if it can't support anything close to a production workload. You may get by for a limited period of time with reduced IO performance, but not for very long.

In a true recovery, you might want to do the traditional restore and wait it out. If you're thinking about all this up front, which you obviously are since you're watching this, you may realize you can get the best of both worlds. If you have a solution that can store your data, maybe still on inexpensive disk, maybe not, depending on how you want to use it, but that avoids the translation layer, and maybe even the deduplication layer, you have a real chance at usable database performance while having it available very quickly. You really do get a bit of the best of both worlds if the solution stores the data in a native format instead of a backup format, for example.

You can architect a solution that does that, and of course, the infrastructure between the backup server and the SQL Server plays a major role as well. Can you use the same kind of high-speed connection you use for your storage today, or does it have to be something slower and more complicated?

Instant mounts are really good for your partial recoveries and your explorations. Traditional restores tend to be better for full restores in place in most cases, unless you have some way to move the data in the background, something like Storage vMotion; that might be an option there. But once you realize the power of the instant mount, with or without any of these layers, you start to realize there might be more you can do with that data besides recoveries. And that's where Part Three of this series is going to take us. We're going to explore what else we can do with that data, and what other features and functionality you might need out of this solution to enable you to use that copy for more than just recoveries.


