
How to Develop Higher Quality Applications Faster with DevOps and Copy Data Virtualization

If you’ve ever worked anywhere near an enterprise IT department, you’re familiar with the internal scuffle between the development and operations teams. Through no fault of their own, they’re doing what they’ve been charged to do. Dev, tasked with building new applications to advance the business, needs to get things done faster. At the same time, Ops, tasked with safeguarding data and infrastructure to protect the business, is pushing for things to go slower. With totally opposing missions, it’s no wonder things get done at a less efficient pace than ideal.

But there’s a wave currently sweeping across enterprise IT organizations. The days of conflict between the dev and ops teams are finally winding down, and an olive branch is being extended in the form of a new working methodology. Enter DevOps.

“Fundamentally DevOps is about taking the behaviors and beliefs that draw us together as people, combining them with a deep understanding of our customers’ needs, and using that knowledge to ship better products to our customers,” says Adam Jacob, CTO and cofounder of Chef.

He goes on to define the concept, saying, “DevOps is a cultural and professional movement, focused on how we build and operate high velocity organizations, born from the experiences of its practitioners.”

More than just a set of tools or a way of managing projects, DevOps requires organizations to change the way they approach IT management. Highly successful organizations that have adopted DevOps merge dev and ops into one connected team with a shared mission. Dev is no longer responsible only for the software, and ops is no longer responsible only for the infrastructure, with each pointing the finger at the other when something goes wrong; both have to answer for the success of the application and the business outcome. The result is higher quality applications with fewer defects and a larger impact on the company.

DevOps has the ability to completely transform the way an organization builds and ships code and has a material impact on the bottom line as well. Higher quality applications brought to market more quickly means more money coming in faster.

But while DevOps can improve the speed at which people work, there’s still the issue of how fast the machines and processes can work. Unfortunately for most enterprises, according to Gartner, copies of large datasets can take hours, if not days, to provision, and in some cases very large datasets can even take weeks. For developers looking to obtain copies of production data for testing purposes, the waiting game creates a huge bottleneck and a major source of tension between the dev and ops teams.

In an attempt to sidestep this problem, teams often employ dummy data, which avoids the provisioning time but can lead to lower quality applications. Because dummy data isn’t a true representation of the data in production, bugs that can’t be caught in testing and QA often surface once an application is released into the wild, creating further issues down the road and, ultimately, a poor user experience.

To combat these issues, test data management solutions have begun to appear across the technology landscape in hopes of shrinking data provisioning times and reducing the development team’s dependence on the operations team. One of these new technologies, copy data virtualization, is the missing piece in the DevOps puzzle. It works by creating a single, golden master copy of production data, from which a nearly unlimited number of virtual copies can be spun up almost instantly.
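The idea is easiest to see in miniature. Here is a minimal sketch of the copy-on-write principle behind virtual copies, using Python’s `collections.ChainMap` as a stand-in for block-level virtualization; the names and data here are purely illustrative, not any vendor’s API:

```python
from collections import ChainMap

# Illustrative "golden master": production data stored exactly once.
golden_master = {"customers": 1_000_000, "region": "us-east"}

def virtual_copy(master):
    """Spin up a virtual copy: an empty overlay backed by the master.
    Reads fall through to the master; writes land only in the overlay,
    so each copy costs almost nothing to create (copy-on-write)."""
    return ChainMap({}, master)

dev_copy = virtual_copy(golden_master)
qa_copy = virtual_copy(golden_master)

dev_copy["region"] = "eu-west"               # change is local to dev_copy
assert golden_master["region"] == "us-east"  # master untouched
assert qa_copy["region"] == "us-east"        # other copies unaffected
```

Because every copy is just a thin overlay on the shared master, spinning up the tenth copy is as cheap as spinning up the first, which is what collapses provisioning time from days to moments.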

With copy data virtualization, ops can provide self-serve, instant data access to dev by taking themselves out of the equation. The end result is higher quality applications delivered faster. Coupled with a DevOps framework, copy data virtualization provides the missing link to allow for a powerful business transformation.
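To make the self-serve workflow concrete, here is a toy model of that division of labor. `CopyDataService` and its methods are hypothetical, invented for illustration only: ops registers the golden master once, and dev provisions virtual copies on demand with no ticket in between.

```python
import time

class CopyDataService:
    """Toy model of copy data virtualization: one golden master per
    dataset; a virtual copy is just a lightweight metadata record that
    references the master, so provisioning is near-instant."""

    def __init__(self):
        self.masters = {}

    def register_master(self, name, snapshot):
        # Ops does this once per dataset.
        self.masters[name] = snapshot

    def provision(self, name):
        # Dev calls this on demand -- no ticket, no waiting on ops.
        start = time.perf_counter()
        copy = {"backed_by": name, "overlay": {}}  # shares the master's data
        elapsed = time.perf_counter() - start
        return copy, elapsed

svc = CopyDataService()
svc.register_master("prod_db", snapshot={"rows": 10**9})
copy, seconds = svc.provision("prod_db")
print(f"provisioned in {seconds * 1000:.3f} ms")
```

The design point is the split of responsibilities: ops touches the system once, at registration time, and every subsequent request is handled without them, which is exactly how the bottleneck disappears.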

With dev no longer facing bottlenecks in data provisioning, and ops no longer having to worry about producing copies of bulky production datasets, both teams are able to focus on the problems that matter most to the business. Everyone wins.
