
As DevOps has become the preferred approach to application development, a familiar question has resurfaced: how quickly and safely can we give developers access to the production data they need? That's where data virtualization comes in.
A prime use for data virtualization is efficient data protection. Once virtualized, a single copy of the data can serve backup, DR, and analytics, eliminating the extra physical copies those functions used to require. Increasingly, virtual data access also solves that long-standing challenge for application developers.
Data virtualization is a natural progression from server and network virtualization. It enables rapid development, testing, release, and refresh of applications. When we say it can reduce provisioning times by as much as 90%, the reactions are typically "show me," preceded by some colorful modifiers. But we've seen data virtualization accelerate development speed, by a lot. It automates workflows, enables on-demand data access, and delivers provisioning measured in minutes instead of hours or days. Yes, data virtualization substantially reduces storage requirements. It also helps protect sensitive data and comply with regulations by incorporating automated data masking that eliminates an expensive, time-consuming manual process. That's why developers have become virtual data enthusiasts.
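To make the automated-masking point concrete, here is a minimal sketch in Python. The column names, masking rules, and `mask_row` helper are hypothetical illustrations, not any particular platform's API; real products ship configurable rule libraries that run this kind of transformation automatically at provisioning time.

```python
import hashlib

# Hypothetical masking rules: column name -> masking function.
# Real platforms ship libraries of such rules; these are illustrative only.
MASKING_RULES = {
    "ssn": lambda v: "***-**-" + v[-4:],  # keep only the last four digits
    "email": lambda v: hashlib.sha256(v.encode()).hexdigest()[:12] + "@masked.example",
}

def mask_row(row: dict) -> dict:
    """Return a copy of the row with sensitive columns masked."""
    return {col: MASKING_RULES.get(col, lambda v: v)(val) for col, val in row.items()}

# Masking like this runs as part of provisioning, so developers
# never see raw production values.
production_row = {"id": 42, "ssn": "123-45-6789", "email": "jane@example.com"}
print(mask_row(production_row))
# e.g. {'id': 42, 'ssn': '***-**-6789', 'email': '<hash>@masked.example'}
```

Because the rules run during provisioning rather than as a separate project, developers get realistic data shapes without ever touching raw production values.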
Data virtualization gives application development an exceptional edge. It supports application development, testing, and operations pre-production testing simultaneously. That is the power needed to reshape traditional mindsets and accelerate time to value.
Virtualized data gives DevOps new independence. Decoupling applications from infrastructure brings speed; decoupling data from infrastructure multiplies it. Self-service becomes viable, and reduced bandwidth, storage, and licensing costs follow, along with big infrastructure savings.
Virtualized data also presents an opportunity to improve the enterprise information lifecycle. Faster development means faster time to market. Virtualized data delivers better access with better protection, control, and cost. And for application retirement, it adds a unique twist: the relationship between data and application is maintained.
So, as you look at potential data virtualization platforms, consider scalability and performance. Can the platform handle terabytes to petabytes? Is performance assured without any impact on production? Which operating systems, databases, and physical and virtual systems are supported? How does it handle data movement, migration, and cloud integration? Does it offer industry-standard data management, and how straightforward are its control tools? Beyond all of these, can it handle new development, ongoing application management, and eventual retirement through simple orchestration tools?
Other features to look for (a workflow sketch tying them together follows the list):
- Role-based controls that enable self-service data access without DBA involvement
- Sensitive data controls, including data masking and role-based access
- Consistency groups that coordinate multiple volumes, applications, data, etc.
- Coordination of database and log files to roll logs prior to mount
- Automated refresh of data
- Workflows to automate processes
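The checklist above is easier to evaluate against a concrete flow. The sketch below is a minimal Python illustration, assuming a hypothetical `VirtualDataPlatform` client (not a real product API), of how those pieces typically fit together: snapshot a consistency group, roll logs forward, apply masking, then mount a virtual copy for a developer.

```python
"""Illustrative self-service provisioning flow. VirtualDataPlatform and
its methods are hypothetical stand-ins used only to show the order of
operations the feature list implies."""

import datetime

class VirtualDataPlatform:
    def snapshot_consistency_group(self, group: str) -> str:
        # Capture all volumes in the group at one point in time so the
        # database files and logs stay mutually consistent.
        print(f"snapshot taken for consistency group '{group}'")
        return f"{group}-{datetime.datetime.now():%Y%m%d%H%M%S}"

    def roll_logs(self, snapshot_id: str) -> None:
        # Apply transaction logs up to the snapshot point before mounting,
        # so the virtual copy opens clean.
        print(f"logs rolled forward for {snapshot_id}")

    def mask(self, snapshot_id: str, policy: str) -> None:
        # Apply the masking policy before any developer can see the data.
        print(f"masking policy '{policy}' applied to {snapshot_id}")

    def mount(self, snapshot_id: str, target_host: str) -> None:
        # Present a virtual (pointer-based) copy -- no full physical copy.
        print(f"{snapshot_id} mounted on {target_host}")

def provision_dev_copy(platform, group, policy, host):
    """End-to-end self-service request: minutes, not hours or days."""
    snap = platform.snapshot_consistency_group(group)
    platform.roll_logs(snap)
    platform.mask(snap, policy)
    platform.mount(snap, host)
    return snap

if __name__ == "__main__":
    provision_dev_copy(VirtualDataPlatform(), "erp-prod", "pii-default", "dev-host-01")
```

The ordering is the point: logs roll before the mount so the copy opens clean, and masking runs before any developer access, which is what makes self-service safe without DBA involvement.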
Yes, there may be skeptics, but the proof is clear from those who have adopted data virtualization. Output quality improves. Customers are more satisfied. Developers are happier.
The potential is transformative.