By Brian Reagan, VP Product Strategy & Business Development – This is the second in a series of posts on Creating an Efficient Data Center that originally appeared on Enterprise CIO Forum. The five-part series covers the key barriers standing in the way of IT executives trying to move to their next-generation data center model. The other installments will appear on this blog in the coming days.
Barrier 2 – Applications Development
There is tremendous pent-up demand for new applications in the enterprise, from transforming legacy code to enabling new business initiatives. The tools to develop these applications have become easier to use and more powerful, and compute resources, whether local or in the cloud, have become more available and elastic. Unfortunately, the storage infrastructure underpinning the development, test, QA, and staging environments hasn't necessarily kept pace.

Consider a typical workflow in which a developer wants a copy of the production database to work against. Let's assume that database is 2TB in size. The developer requests a copy from the DBA, who in turn requests storage from the storage team. Given the typical turnaround time, measured in weeks, the DBA requests more storage than 2TB, just to be on the safe side. The storage team knows that the DBA pads these requests, but also knows that the DBA will likely be back with a different request tomorrow, and the next day. So they provision the storage and wait for the next request.
This workflow plays out over one to two weeks and creates interruptions and downtime in the development schedule. It then repeats for the user-acceptance testing team, and again for the pre-production/staging environment: a single 2TB database can easily spawn three or more padded physical copies, consuming well over 6TB of storage. The result is delayed projects that can have a direct impact on revenue and/or customer satisfaction. In other words, infrastructure inefficiency translates into lost revenue or unhappy customers.
The roadmap toward addressing this bottleneck starts by driving virtualization beyond compute and network, into storage and even the underlying data itself. Virtual data copies are a particularly important link in this chain. Once the data is virtualized, the time to provision those copies can be reduced from weeks to minutes, or even seconds, depending on the size of the data set. It's important to ensure that the chosen solution supports true read/write access for development teams in a highly space-efficient manner; otherwise, you've traded the problem of costly delays for another one: storage overspending.
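To make the "instant, space-efficient, read/write" claim concrete, here is a minimal sketch of the copy-on-write idea that typically underlies virtual data copies. The names (BaseImage, VirtualCopy) and the block-map structure are illustrative assumptions for this post, not any particular vendor's implementation:

```python
# A minimal copy-on-write sketch, assuming block-addressed images.
# Names here are hypothetical, not a real product's API.

class BaseImage:
    """An immutable 'production' image, addressed in fixed-size blocks."""
    def __init__(self, blocks):
        self.blocks = blocks  # list of block payloads

class VirtualCopy:
    """A read/write clone that stores only the blocks it overwrites.

    Creating one is O(1) regardless of image size, which is why
    provisioning a copy of a 2TB database can take seconds, not weeks.
    """
    def __init__(self, base):
        self.base = base
        self.delta = {}  # block index -> privately written payload

    def read(self, i):
        # Reads fall through to the shared base unless this copy wrote the block.
        return self.delta.get(i, self.base.blocks[i])

    def write(self, i, payload):
        # Writes land in the private delta; the base and sibling copies are untouched.
        self.delta[i] = payload

# Usage: two developers get independent read/write copies instantly.
prod = BaseImage(blocks=["b0", "b1", "b2"])
dev_a, dev_b = VirtualCopy(prod), VirtualCopy(prod)
dev_a.write(1, "b1-patched")
assert dev_a.read(1) == "b1-patched"  # dev_a sees its own change...
assert dev_b.read(1) == "b1"          # ...dev_b still sees production data
```

The design choice worth noticing is that each copy consumes storage only for the blocks it changes, so ten developer copies of a 2TB database cost far less than 20TB; that per-copy delta is what makes true read/write access affordable.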
This article originally appeared on Enterprise CIO Forum.
Image Credit: Jlhopgood via Compfight cc