By Brian Reagan, VP Product Strategy & Business Development – This is the first in a series of posts on Creating an Efficient Data Center that originally appeared on Enterprise CIO Forum. The five part series covers the key barriers that stand in the way of IT executives trying to move to their next generation data center model. The other installments will appear on this blog in the coming days.
It seems that we’re in the midst of another cycle in IT. The push towards outsourcing seems to be waning, as new technologies like virtualization and cloud put more cost-effective control in the hands of IT leaders. New, next-generation thinking around data center design and operations – some of which is fueled by born-in-the-cloud companies like Google and Facebook – is also driving new perspectives.
The thinking has advanced, yet the inefficiencies that have plagued IT for decades remain. In fact, many of them have become more acute in the face of continued data growth, increased heterogeneity and complexity, and leaner operations. Whether you’re taking back control of a previously outsourced data center, or dealing with a sub-optimal current state, it’s important to understand the root cause of these inefficiencies and build a roadmap to mitigate them.
Of course, it would be better to eliminate these inefficiencies entirely. Yet, I’d argue that aiming towards improving efficiency is the wrong target. In a Management course in my first year of business school, the professor went to great pains to articulate the difference between efficiency and effectiveness. “Efficiency,” he said, “tends to be process focused and is about doing things better. Whereas effectiveness is about doing better things. It’s about the outcome.” In other words, we should be aiming towards a goal of a more effective data center. The continuous improvement efforts around efficiency certainly deliver positive results, but will not provide the transformation that many IT operations demand.
Let’s unpack some of the primary barriers to achieving a truly effective data center.
Barrier 1 – Copy Data Sprawl
It’s a fact that data growth rates continue unabated. But, as IDC described in The Copy Data Management Market: Market Opportunity and Analysis White Paper (ID #241047), the story behind the story is the growth rate of Copy Data, or separate copies of production data made and kept for different uses. The fact is that we’re spending the majority of our storage budget – hardware, software, and operations – on managing copies of our mission-critical data rather than the actual source data. The result: dramatically over-built storage infrastructures, orphaned capacity, and limited visibility across the environment. This leads to longer lead times for provisioning, which creates a bottleneck for growth- and innovation-oriented projects. It’s also an economic disaster, as the return on assets and invested capital is negatively affected.
Getting ahead of copy data sprawl starts with understanding the magnitude of the problem. IDC suggests defining your Copy Data Ratio (CDR): the amount of total data divided by the amount of production data, multiplied by 100. A score below 150 is optimistic, but as the number increases, the situation becomes more acute and the need for action increases; a score over 700 is considered a crisis. Plot this number against the number of separate systems and/or tools in use to support these copies of data, and you get the second dimension of the copy data problem. This also details the roadmap for consolidation. As IDC indicated, there is a rising crop of new tools and vendors building solutions to address the copy data problem.
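To make the arithmetic concrete, here is a minimal sketch of the CDR calculation as described above. The function names and the labels attached to each threshold band are illustrative, not IDC terminology; only the formula (total divided by production, times 100) and the 150/700 thresholds come from the text.

```python
def copy_data_ratio(total_tb: float, production_tb: float) -> float:
    """CDR = (total data / production data) * 100, per the IDC definition."""
    if production_tb <= 0:
        raise ValueError("production data must be positive")
    return (total_tb / production_tb) * 100


def assess_cdr(cdr: float) -> str:
    # Threshold bands from the IDC guidance: below 150 is optimistic,
    # over 700 is a crisis; the middle band label is our own shorthand.
    if cdr < 150:
        return "optimistic"
    if cdr > 700:
        return "crisis"
    return "increasingly acute"


# Example: a 60 TB total footprint backing only 10 TB of production data.
cdr = copy_data_ratio(60, 10)
print(cdr)              # 600.0
print(assess_cdr(cdr))  # increasingly acute
```

A shop in that middle band is carrying five copies for every unit of production data – a useful single number to put in front of the team responsible for the consolidation roadmap.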