
Flash Time

Customers tell us that flash storage gives them time. Apparently lots of it. In some ways that’s predictable. But when they take a systems view, combining multiple technologies including data virtualization and flash, it gets really interesting.

—–

This is the second post in a series on Flash Storage and our view on the intersection of this technology and data virtualization. To start at the beginning, read the first post “More Flash From Flash.”

—–

To start with the obvious, flash is touted as orders of magnitude faster than spinning disk. It moves transactions from milliseconds to microseconds. With no moving parts it’s reported to be more reliable as well as more energy efficient. All predictable stuff. But when customers start applying it to their real environments, they’ve been telling us about some surprises.

New demands for performance, flexibility, capacity and cost have been stretching the basics of IT design for a while now. Add in the growing demands of cloud, big data and mobility, and “the way we have always done it” doesn’t. So, with performance as a central theme, flash gets installed. It’s 20 times faster than spinning disk, and you can fit 24 terabytes into the space of a pizza box. (Why a pizza box seems to be a standard IT unit of measurement is a mystery.)

For some high-performance applications, this means a half dozen storage racks filled with short-stroked drives can be replaced by a small part of a single rack. Beyond the space savings, the economics of that trade become much more compelling, and not only because of the extreme performance: now all of the capacity gets used as well. For example, one service provider installed five terabytes in a few inches of rack space instead of installing 1,300 short-stroked spinning disks across several floor tiles. They got the high performance at a fraction of the expense. Yes, it’s counterintuitive, but it’s also happening.

As flash capabilities take hold, customers report application transaction and batch processing times reduced by 80 percent or more. They see that it’s possible to handle more transactions and get more business done. Then, when flash is combined with data virtualization, they begin to see less friction where the application meets the infrastructure. Test and development cycles run faster and more often. They’re no longer making clone after clone. Workflows get accelerated.

It’s speed on speed.

We know of one customer, a service provider, that estimates they saved as much as 2,000 hours of test/dev time over the last year. For them it’s a formula for competitive advantage and a practical example of systems thinking: faster storage transactions have beneficial impacts up and down the stack. Applications are enabled in ways that simply wouldn’t scale on traditional spinning disk.

Until now, most flash discussions have focused on the highest-performance use cases. But as data virtualization reduces the total storage footprint, the lower demand for capacity gets people thinking about broader uses. When the entire stack is evaluated, fewer high-performance servers are required, and demand for software licenses is reduced along with environmental costs for power, cooling and floor space. One IT director compared it to a plate of spaghetti: you don’t judge a single strand, he said, but the entire presentation. (Seems these guys think a lot about food.) This virtuous circle led him to scrap plans for an expanded data center, eliminating construction costs, operating costs and substantial staffing expense.

We’re also hearing that flash has another set of benefits that may not be immediately obvious or fully quantifiable. Overhead is reduced because flash is simpler to manage, so storage admins can concentrate more on adding value and less on operational mechanics. That has some thinking about “all flash” conversions for every production application: use the virtual golden data copy to serve every core requirement, and relegate spinning disk to background tasks.

All of this means that how we think about costs, staff time and tasks can be reordered. What is the time-value when application performance problems no longer create fire drills for staff from every discipline? Suppose there’s no need for all the storage performance trivia of optimizing, tuning and monitoring. Then suppose all that performance improvement makes end users more productive. What if it becomes possible to build an application that wouldn’t scale on traditional storage? Not every flash user tells us they have already seen lots of unexpected time benefits from flash or from flash plus data virtualization. But all are seeing some, and all expect to discover more.

Call it the flash-time effect.

Photo credit: hjl