Actifio is now part of Google Cloud.

Effective Test Data Management: Music to a DBA’s Ears

There is no better way to hear from potential end users than at a trade show, and Oracle OpenWorld (OOW) in San Francisco is a great example. At our booth last year we had the opportunity to talk to teams of DBAs from a wide range of enterprise organizations and hear about their day-to-day frustrations.

A very common issue is the need to coordinate ad hoc restores for the Test, Dev, or QA teams. The process routinely forces DBAs to open a change ticket; wait for a storage admin to release more storage (if they have any); wait for a server admin to discover that storage and create a restore point; and wait for a backup admin to perform the ad hoc restore.

This causes two problems for DBAs:

1) The whole process can consume a lot of time in coordination, creating friction with internal end users who need access to data quickly. The LOB (Line of Business) team often blames the DBA team for slow response times.

2) The ad hoc backups and restores also need extra storage, and with tight IT budgets this is a consistent challenge that causes friction with the internal IT team.

Bottom line: the DBA team is being beaten up by two teams, the Dev & QA teams from the LOB and the IT storage team. So a common DBA question at the Actifio booth was: "How can Actifio help solve these problems?"

Our reply was very simple: imagine a world where you have a simple CLI (Command Line Interface) or API at your disposal. With that CLI, you are in full control of managing and accessing copies of databases in a self-service manner, without having to open tickets or talk to the storage admin, the server admin, or the backup admin. With just a few commands you can:

  1. List the databases that are being protected by Actifio
  2. List the various point in time images of a specific database
  3. Provision a "virtual" database instantly on a target machine, in a self-service manner. It does not matter whether your database is 1 TB or 30 TB: because Actifio provisions virtual databases, provisioning is near-instant. You can provision as many virtual copies as you want, five or ten at a time; they are provisioned in parallel and so take the same amount of time as one. And because the databases are virtual, they consume no extra storage.
  4. Refresh the virtual databases with incremental changes that happened in production. This works with file system, RAC, and ASM environments, over both Fibre Channel and IP networks.
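As a concrete sketch of that flow, the listing and provisioning steps boil down to single commands (both demonstrated later in this post). The block below only echoes the commands rather than executing them, and the database, host, workflow, and appliance values are placeholders:

```shell
# Dry-run sketch: echo prints each command instead of executing it.
# Database, host, workflow, and appliance values are placeholders.
ACTDBM=/act/act_scripts/actdbm/actDBM.pl
DB=bigdb
HOST=Oracle-Prod
CDS=172.24.1.180

# List the point-in-time images of one database (step 2)
echo "$ACTDBM --type listImageDetails --dbname $DB --hostname $HOST --CDS $CDS"

# Provision a virtual database via a pre-built workflow (step 3)
echo "$ACTDBM --type runwf --subtype directmount --dbname $DB --hostname $HOST --wfname EverWF --CDS $CDS"
```

Each of these commands is walked through in detail in the sections that follow.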

This is music to a DBA's ears. Application managers can accelerate their dev/test cycles because the DBA team is able to provision virtual databases instantly. The IT storage team will see a reduction in storage needs as well. And the DBAs see improved productivity, enabling them to focus on more important production-related tasks instead of executing time-consuming and boring cloning and restore operations.

So how do we do this? How do we deliver this self-service world for our DBAs? Enter actdbm…

What is actdbm?

Think of it this way: when you implement a new product, do you really want to:

  • Learn how to use a new vendor GUI?  Probably not.
  • Use SSH keys and bash scripts?   Maybe, but only as part of a bigger program.   And I don’t want to store passwords in the clear.
  • Use REST API calls?   Probably, but I would rather just pick up simple existing functions and embed them.

We want the power of a CLI and REST API without the learning curve. So Actifio created actDBM, a set of Perl scripts that lets you automate all the essential tasks with a very simple syntax that needs no SSH keys, doesn't store passwords in the clear, and takes almost no effort to learn.

Installing actdbm.pl

To use actdbm.pl, all we need to do is install the Actifio Connector, a lightweight piece of software. You can download it by pointing your web browser at your Actifio Appliance and grabbing the Connector for your operating system.

On Linux with yum, you could use a simple command like this (just swap the IP address 172.24.102.154 for your own Actifio Appliance):

yum localinstall http://172.24.102.154/connector-Linux-latestversion.rpm

Work in the actdbm directory

Because the default install folder is not in the PATH, we need to use the full path. We could always move actDBM to a folder in our PATH.

[oracle@demo-oracle-4 ~]$ cd /act/act_scripts/actdbm
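Alternatively, appending the folder to PATH for the current session avoids both the cd and the full path. This is a standard shell technique, nothing Actifio-specific:

```shell
# Add the actDBM install folder to PATH for this shell session so we
# can run actDBM.pl without typing the full path each time.
export PATH="$PATH:/act/act_scripts/actdbm"
echo "$PATH"
```

Add the same export line to your shell profile to make it permanent.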

List options

We really need to know only one command:   actDBM.pl

It will then lead you by the nose. If you run it with no parameters, it will tell you that you need the --type parameter followed by a type option, such as: ./actDBM.pl --type backup

[av@av-oracle actdbm]# ./actDBM.pl

Usage:actDBM

   --type

        <backup>

        <restore>

        <clone>

        <mount>

        <cleanup>

        <listImageDetails>

        <runwf>

        <createliveclone>

        <refreshliveclone>

        <restoreASMswitch>

        <restoreASMrebalance>

        <cdsconfig>

Store Actifio Appliance details

The first thing we want to do is store the details of our Actifio Appliance using the cdsconfig type. To learn what is needed, just enter the first parameter, which is always --type:

[av@av-oracle actdbm]# ./actDBM.pl --type cdsconfig

Usage: perl actDBM.pl --type cdsconfig

--username <CDS username>

--password <CDS password>

--CDS <CDS details>

Now we perform the necessary command to store the Appliance details.

Yes, you will enter the password in the clear, but it is stored in a hashed and secure fashion (and yes, I know passw0rd is a terrible password; don't hate me).

[root@av-oracle actdbm]# ./actDBM.pl --type cdsconfig --username admin --password passw0rd  --CDS 172.24.1.180

user details are stored successfully

Once we have stored the credentials, we can refer to our Actifio Appliance using just the --CDS parameter (which also works for Actifio Sky).
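If you will be issuing many commands against the same appliance, a small wrapper function can pin the --CDS value once. This is a hypothetical convenience, not part of actDBM itself; the echo makes it a dry run, so drop the echo to actually execute:

```shell
# Hypothetical wrapper: pins the appliance address so each call only
# needs the remaining actDBM parameters. The echo makes this a dry
# run; remove it to actually execute the command.
CDS=172.24.1.180
actdbm() {
    echo /act/act_scripts/actdbm/actDBM.pl "$@" --CDS "$CDS"
}

actdbm --type listImageDetails --dbname bigdb --hostname Oracle-Prod
```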

List images for a DB

In this example we have a production host called Oracle-Prod and running on that host we have an Oracle database called bigdb.

So let’s explore images for our production database bigdb on Oracle-Prod.  We do that with the following command:

[av@av-oracle actdbm]# ./actDBM.pl --type listImageDetails --CDS 172.24.1.180 --dbname bigdb --hostname Oracle-Prod

Database Name: bigdb

Host Name: Oracle-Prod

Backup Image and Recovery Range:

Snapshot Pool Images:

Image_25785750:

RecoveryST: '2016-06-25 12:03:52'  RecoveryET: '2016-06-25 12:03:52'

Image_25787410:

RecoveryST: '2016-06-26 00:03:47'  RecoveryET: '2016-06-26 00:03:47'

Image_25790014:

RecoveryST: '2016-06-26 12:03:50'  RecoveryET: '2016-06-26 12:03:50'

Note that in this example the Recovery Start Time (ST) and End Time (ET) are the same because we are not collecting logs for this DB. We have also shown only the first three images to keep the output short.

If we look at the example below from a second database called smalldb, where we are collecting archive logs between each Oracle RMAN L1 incremental-forever snapshot, you can see the recovery range. You may notice that the Recovery End Time and End Sequence number are the same for each image. That makes perfect sense: the full log range is made available for every image, including images in our deduplication pool. Again, we show only the first three images in each pool to keep the output short.

[av@av-oracle actdbm]# ./actDBM.pl --type listImageDetails --CDS 172.24.1.180 --hostname Oracle-Prod --dbname smalldb

Database Name: smalldb

Host Name: Oracle-Prod

Backup Image and Recovery Range:

Snapshot Pool Images:

Image_25839320:

RecoveryST: '2016-07-03 00:10:34'  RecoveryET: '2016-07-04 20:09:38'

Thread1:     StartSequence: 13185      EndSequence: 13269

Image_25841040:

RecoveryST: '2016-07-03 12:11:09'  RecoveryET: '2016-07-04 20:09:38'

Thread1:     StartSequence: 13207      EndSequence: 13269

Image_25842716:

RecoveryST: '2016-07-04 00:10:43'  RecoveryET: '2016-07-04 20:09:38'

Thread1:     StartSequence: 13230      EndSequence: 13269

De-dup Images:

Image_25524341:

RecoveryST: '2016-06-12 12:08:11'  RecoveryET: '2016-07-04 20:09:38'

Image_25734039:

RecoveryST: '2016-06-19 12:08:20'  RecoveryET: '2016-07-04 20:09:38'

Image_25790261:

RecoveryST: '2016-06-26 12:12:32'  RecoveryET: '2016-07-04 20:09:38'
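If you want to script against this output, a little awk can pick out the most recent snapshot-pool image, assuming the format shown above. The sample below reuses the smalldb output via a heredoc; in practice you would pipe the actDBM.pl command into the same awk:

```shell
# Sketch: pick the most recent snapshot-pool image from
# listImageDetails output (format as shown above). The last Image_
# line before the De-dup section is the newest snapshot.
latest=$(awk '
  /De-dup Images:/ { exit }       # stop before the dedup pool
  /^Image_/        { img = $0 }   # remember the last image line seen
  END              { sub(":", "", img); print img }
' <<'EOF'
Snapshot Pool Images:
Image_25839320:
RecoveryST: '2016-07-03 00:10:34'  RecoveryET: '2016-07-04 20:09:38'
Image_25841040:
RecoveryST: '2016-07-03 12:11:09'  RecoveryET: '2016-07-04 20:09:38'
Image_25842716:
RecoveryST: '2016-07-04 00:10:43'  RecoveryET: '2016-07-04 20:09:38'
De-dup Images:
Image_25524341:
RecoveryST: '2016-06-12 12:08:11'  RecoveryET: '2016-07-04 20:09:38'
EOF
)
echo "$latest"   # Image_25842716
```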

Let’s run an on-demand backup

If we decide we need a new point in time, maybe because we are about to do a production upgrade, we can run an on-demand snapshot:

[av@av-oracle actdbm]# ./actDBM.pl --type backup --dbname bigdb --hostname Oracle-Prod --CDS 172.24.1.180

policyid=8632

Job_25802477

[root@av-oracle actdbm]#
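The Job_ line in that output is handy for scripting: capturing it lets a wrapper log or track the backup job. A sketch using the sample output above via a heredoc (swap the heredoc for the real actDBM.pl call in practice):

```shell
# Sketch: capture the job id that an on-demand backup prints.
# Simulated here with the sample output shown above.
job=$(grep -o 'Job_[0-9]*' <<'EOF'
policyid=8632
Job_25802477
EOF
)
echo "$job"   # Job_25802477
```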

Let’s access an image

This is the fun part. We have listed the images and created a new one; now let's use an image. The command may look long, but most of it just tells the Actifio Connector where to find ORACLE_HOME in case we need to use a different folder. We define a new SID for our mounted image (I called it billed) and a target host (demo-oracle-4). The end result is a new instance of a 2 TB Oracle database, made available in less than three minutes and consuming no additional storage.

[av@av-oracle actdbm]# ./actDBM.pl --type clone --optype newclone --dbname bigdb --sourceHost Oracle-Prod --TargetDBname billed --OSusername oracle  --targetHost demo-oracle-4 --TargetOracleHome /home/oracle/app/oracle/product/11.2.0/dbhome_1 --listenerpath /home/oracle/app/oracle/product/11.2.0/dbhome_1/network/admin --protectNewApplication no --recovery yes --CDS 172.24.1.180

backup arrary is HASH(0x27e7088)

backup name is Image_25802477

db name is bigdb

Image ID 25802479 name Image_25802477

image id is 25802479

image name is Image_25802477

Image details by name

Job_25802582 to mount Image_25802477 started

Meanwhile, over on the target host (demo-oracle-4), we log in and find a new Oracle instance:

[oracle@demo-oracle-4 ~]$ ps -ef | grep pmon

oracle   22319 1  0 21:41 ?     00:00:00 ora_pmon_billed

Think about what this command gives us: near-instant access to an off-production copy of a large production database, with no need to buy or provision extra storage, or to engage the storage admin, the backup admin, or the DBA. Developers can get straight to the data. We can also re-run this command with --optype refresh to update our mounted copy. And if you think this command is too long, I agree! That is why we can embed nearly all of it, once off, in a workflow using the Actifio GUI and simply call the workflow instead:

[av@av-oracle actdbm]# ./actDBM.pl --type runwf --subtype directmount --dbname bigdb --hostname Oracle-Prod --wfname EverWF --CDS 172.24.1.180

backup arrary is HASH(0x2db3000)

backup name is Image_25802477

db name is bigdb

Image ID 25802479 name Image_25802477

image id is 25802479

image name is Image_25802477

WF Id = 25763787

mount id: 25803103

database name is

clone work flow id: 25803199
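For completeness, here is what the refresh variant of the earlier clone command might look like, per the --optype refresh note above. The exact flag set a refresh needs is an assumption on my part (it may not require all of the newclone parameters), so the command is echoed as a dry run:

```shell
# Hypothetical refresh of the mounted copy; flags assumed to mirror
# the newclone example above. Echoed only; remove the echo to execute.
cmd="/act/act_scripts/actdbm/actDBM.pl --type clone --optype refresh \
--dbname bigdb --sourceHost Oracle-Prod --TargetDBname billed \
--targetHost demo-oracle-4 --CDS 172.24.1.180"
echo "$cmd"
```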

Remember that everything you have seen here uses RESTful API calls via a pre-built CLI. All the heavy lifting has been done for you. You could call these commands from your own scripts and integrate them into a portal. Or you can use your own REST API scripts. Or CLI over SSH. Or the Actifio GUI. The choice is yours.

