This is the third and final post in a series on Flash Storage and our thoughts on the intersection of this technology and data virtualization. To start at the beginning, read the first post “More Flash From Flash.”
Some things about IT appear to be universal. Just listen to the words we most consistently hear from IT execs: “complex and challenging.” In their eyes, anything that makes IT simpler, faster, less expensive and more reliable is welcome relief. That explains why Flash Storage is catching on so quickly. It fits the bill of simpler, faster and more reliable. Less expensive – not so much. Still, they see compelling benefits in flash and, despite the painful price, many organizations have decided to adopt. Some in a big way.
The next pain they all face is moving data from the old kit.
The stats we hear about 50-60% annual data growth have been consistent for years now. So we know that most companies are spending a fortune on disk, space, power and the people to manage it all. Then there are the hundreds of extra copies they don’t use that clog up the system. And while data is growing out of control, we know that budgets aren’t. We also know that the connected world has no tolerance for slow systems or any downtime. We know that big organizations are boiling over with disparate technology, and that many companies are desperate to get out of that technology soup.
All big motivators for anything more efficient, faster, safer and less expensive.
We are seeing Flash entice converts because the always-on business cycle requires immediate systems response at high performance levels. Buyers want absolute business confidence in service level results but can’t bear to create yet another separate technology stack. If anything, they want to wipe out some of the old ones. If Flash helps to create a multi-purpose capability, that makes it ideal.
We have seen evidence of this in recent customer conversations showing movement toward that kind of multi-purpose potential. It’s a rising awareness of the benefits of data virtualization coupled with flash. It typically starts when virtualization improves data management. Next, more consolidation becomes feasible. Creating an all-flash environment also makes storage provisioning much simpler: everybody gets the fast stuff, so performance can be assumed. The only discussions left are around capacity. And all of this can make the economics of flash storage more attractive.
Great. Now, all we need to do is move that data from the spinning disk to the shiny new Flash array. It’s every storage admin’s favorite thing – data migration.
If you’ve ever done it, you know data migrations are some of the most painful projects IT faces. From storage consolidations to new storage arrays to relocated or consolidated data centers, data migration from existing to new systems has typically been a long, complex, frustrating and expensive nobody-having-fun challenge. But there is a hidden bonus for teams that have already been reducing copies with data virtualization. There is less to move, and virtualized files greatly speed and simplify the migration. In fact, virtualization becomes the mechanism to make the move.
Take this customer example. A major university Law School program was being transferred to a different school. The data migration needed to happen without downtime. Because virtualization had substantially reduced required capacity, there was less to move. The cost savings were enough to justify moving to all-flash storage arrays as well. And the best thing was that the capability to make it simple was already in-house. The new university location had Flash Arrays installed and was connected over the network to the virtualization appliance. That made the remote flash visible as if it were a recovery destination. With a click to restore, the servers and data were moved without disruption. Users skipped a day of downtime and the IT team skipped weeks of involved planning and migration pain. Better yet, because migration is a recurring need rather than an occasional event, they now use this approach any time they want.
Think about that for a typical large enterprise already managing petabytes of data and growing at 40-60% annually. The normal refresh cycle will introduce storage arrays needing daily data movement of terabytes. Decisions on just what data should move are simplified. So is consolidation of servers and networks. Data virtualization does double duty: it reduces data volumes and does the migration too. The cost of flash may be scary, but migration to it doesn’t need to be anymore.