Big Data. Everyone’s still talking about it – and it shows no sign of stopping as the use of computers, mobile devices and the Internet continues to increase.
For businesses, the key to understanding Big Data is to accept that it is not a class or type of data. Rather, the term describes the analysis of large volumes of varied types of data, and a broader trend covering multiple new approaches and technologies for storing, processing and analyzing that data.
For many, it is seen as the Holy Grail for businesses today: it promises to let organizations understand what their customers want and target them to drive sales and growth. The Big Data trend has the potential to revolutionize the IT industry by offering businesses new insight into data they previously ignored.
However, in an age where Big Data is the mantra and terabytes quickly become petabytes, the surge in data quantities is causing the complexity and cost of data management to skyrocket. At the current rate, by 2016 the world will be producing more digital information than it can store.
So, what’s the real issue here?
The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. IDC estimates that 60 per cent of what is stored in data centres is actually copy data – multiple copies of the same thing, or outdated versions of it. The vast majority of stored data consists of extra copies of production data created by disparate data protection and management tools for backup, disaster recovery, development, testing and analytics.
IDC estimates that a single company may circulate up to 120 copies of a given piece of production data, and that the cost of managing this flood of copies has reached $44 billion worldwide. The net result is that managing copy data now consumes more resources within a company than managing the actual production data.
While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of data bloating. In the same way that prevention is better than cure, reducing this weed-like data proliferation should be a priority for all businesses.
Enterprise IT heads tend to have similar key strategic priorities – improving resiliency, increasing agility, and moving toward the cloud to make their systems more distributed and scalable. Often they are held back by traditional software and hardware.
Copy data virtualization – freeing organizations’ data from their legacy physical infrastructure just as virtualization did for servers a decade ago – is increasingly seen as the way forward. In practice, copy data virtualization can reduce storage costs by 80 per cent. At the same time, it makes virtual copies of ‘production quality’ data available immediately to everyone in the business who needs them. The result is that data copies no longer take up server space, or the time spent managing them.
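The mechanics behind this can be sketched with a simple copy-on-write model. The code below is a conceptual illustration only – the class names are invented for this sketch and do not correspond to any vendor’s API. Each virtual copy reads unchanged blocks from a single shared ‘golden master’ and stores only the blocks it modifies, which is why ten virtual copies cost little more than one physical one.

```python
# Conceptual sketch of copy data virtualization (hypothetical classes,
# not a real product API). One physical "golden master" holds the data;
# each virtual copy stores only its own changed blocks (copy-on-write).

class GoldenMaster:
    def __init__(self, blocks):
        self.blocks = list(blocks)  # the single physical copy of the data

class VirtualCopy:
    def __init__(self, master):
        self.master = master
        self.changed = {}  # block index -> modified content

    def read(self, i):
        # Unchanged blocks are served straight from the shared master.
        return self.changed.get(i, self.master.blocks[i])

    def write(self, i, data):
        # Only modified blocks consume any extra storage.
        self.changed[i] = data

    def extra_blocks(self):
        return len(self.changed)

master = GoldenMaster(["blk%d" % i for i in range(1000)])
copies = [VirtualCopy(master) for _ in range(10)]
copies[0].write(3, "patched")

# Ten full physical copies would need 10,000 blocks; here we store
# 1,000 shared blocks plus a single changed block.
print(sum(c.extra_blocks() for c in copies))   # 1
print(copies[0].read(3), copies[1].read(3))    # patched blk3
```

The design mirrors how snapshot and clone features in modern storage systems work: writes never touch the master, so every team sees consistent production-quality data while the incremental storage cost of each copy is proportional only to what that copy changes.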
That includes regulators, product designers, test and development teams, backup administrators, finance departments, data-analytics teams, marketing and sales departments. In fact, any department or individual who might need to work with company data can access and use a full, virtualized data set. That’s powerful, because it allows for rapid prototyping, testing and innovation using a complete data set rather than a subset or sample. This is what true agility means for developers and innovators.
Moreover, network strain is eliminated, and IT staff – traditionally dedicated to managing the data – can be refocused on more meaningful tasks that grow the business. Data management licensing costs fall too, since backup agents, de-duplication software and WAN (wide area network) optimization tools are no longer required.
By eliminating copy data and working from a ‘golden master’, storage capacity requirements shrink – and with them, all the attendant management and infrastructure overheads. The net result is a more streamlined organization driving innovation and improved competitiveness for the business.