Efficient Data Centers – Barrier 3: Data Protection

By Brian Reagan, VP Product Strategy & Business Development – This is the third in a series of posts on Creating an Efficient Data Center that originally appeared on Enterprise CIO Forum. The five-part series covers the key barriers that stand in the way of IT executives trying to move to their next-generation data center model. The other installments will appear on this blog in the coming days.

Barrier 3 – Data Protection

It seems that every week, a new survey is published that highlights the gap between the desired state of resilience and the actual ability to protect, recover, and resume operations in a timely fashion. This is not an artifact of poor planning or sloppy operations. Rather, it's the compounding effect of data growth, environmental complexity, and business tolerance (or lack thereof) for downtime. Infrastructure sprawl, coupled with increased heterogeneity and complexity, and a spectrum of tools designed to serve point needs all lead us to a situation in which we're forced to make tradeoffs in terms of which applications get the highest – or even any – levels of data protection. Unfortunately, that's the reality that many businesses face today. There are no longer enough hours in the day, appropriate and/or cost-effective technologies, or people on the ground to deliver 100% resilience – zero data loss, instant recovery, geographic protection – for all of their applications.

But, given your business, do all of your applications need 100% resilience as defined? It's vital to take inventory of your current and pending applications and map each one to an appropriate set of service-level objectives. For example, is the transaction volume frequent enough to require mirroring of the data? If you're running payment processing or electronic exchanges, the answer is probably yes. For a manufacturing operation building automobiles, perhaps the requirement can be relaxed. Can the business withstand a 24-hour window to restart the entire application infrastructure – server, storage, network, user access, support, etc.? Evaluating the tools currently in use against this service-level catalog will then identify gaps and/or overlaps in technologies – opportunities for decommissioning and savings. Ultimately, the technology implemented should support flexible SLAs that can be applied to all applications and host types, both physical and virtual. This is the culmination of the Application-Defined Data Center, a vision in which applications orchestrate the underlying infrastructure on their behalf in order to meet SLAs. Many vendors are approaching parts of this problem through strategies under the heading of "software defined," yet these still suffer from being infrastructure- rather than application-centric. This will be an important area for innovation and investment over the next decade.
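To make the service-level catalog idea concrete, here is a minimal sketch in Python of what that mapping exercise might look like. The application names, tiers, and RPO/RTO values are all hypothetical, and the audit simply compares what each application requires against what its current protection tooling delivers:

from dataclasses import dataclass

@dataclass
class ServiceLevel:
    rpo_minutes: int  # recovery point objective: tolerable data loss
    rto_minutes: int  # recovery time objective: tolerable downtime

# Hypothetical catalog: what the business requires per application...
REQUIRED = {
    "payment-processing": ServiceLevel(rpo_minutes=0, rto_minutes=15),
    "manufacturing-erp": ServiceLevel(rpo_minutes=240, rto_minutes=1440),
    "internal-wiki": ServiceLevel(rpo_minutes=1440, rto_minutes=2880),
}

# ...versus what the current tools actually deliver (also hypothetical).
DELIVERED = {
    "payment-processing": ServiceLevel(rpo_minutes=60, rto_minutes=120),   # nightly snapshots + replication
    "manufacturing-erp": ServiceLevel(rpo_minutes=0, rto_minutes=15),      # synchronous mirroring
    "internal-wiki": ServiceLevel(rpo_minutes=1440, rto_minutes=2880),     # weekly backup
}

def audit(required, delivered):
    """Flag gaps (under-protected apps) and overlaps (over-protected apps)."""
    for app, need in required.items():
        have = delivered[app]
        if have.rpo_minutes > need.rpo_minutes or have.rto_minutes > need.rto_minutes:
            print(f"GAP:     {app} needs RPO<={need.rpo_minutes}m / RTO<={need.rto_minutes}m, "
                  f"gets RPO={have.rpo_minutes}m / RTO={have.rto_minutes}m")
        elif have.rpo_minutes < need.rpo_minutes and have.rto_minutes < need.rto_minutes:
            print(f"OVERLAP: {app} is over-protected – a candidate for decommissioning and savings")
        else:
            print(f"OK:      {app}")

audit(REQUIRED, DELIVERED)

In a real environment the catalog would be drawn from an application inventory or CMDB rather than hard-coded, but the same required-versus-delivered comparison is what drives the gap and overlap analysis described above.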

This article originally appeared on Enterprise CIO Forum.

Photo Credit: Arthur40A via Compfight cc
