Big Data. It’s the buzzword of the moment. And it’s got everyone talking. The Big Data trend has the potential to revolutionise the IT industry by offering businesses new insight into data they previously ignored. For many, it is seen as the Holy Grail of modern business. For organisations, it is the route to understanding exactly what their customers want, and to responding appropriately.
[easy-tweet tweet=”#BigData is more than just a #tech buzzword, it has the potential to revolutionise the #IT Industry”]
In an age where Big Data is the mantra and terabytes quickly become petabytes, the surge in data quantities is causing the complexity and cost of data management to skyrocket. At the current rate, by 2016 the world will be producing more digital information than it can store. To put that mismatch between data and storage into perspective, one zettabyte (a trillion gigabytes) would fill the storage of around 34 billion smartphones.
by 2016 the world will be producing more digital information than it can store
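To sanity-check that smartphone comparison, here is a quick back-of-the-envelope calculation in Python. The per-phone capacity of roughly 30 GB is an assumption for illustration, not a figure from the article.

```python
# Back-of-the-envelope check of the zettabyte-to-smartphone comparison.
# The ~30 GB per phone figure is an assumption, not taken from the article.

ZETTABYTE = 10**21            # bytes in one zettabyte (decimal, SI)
PHONE_STORAGE = 30 * 10**9    # assumed usable storage per smartphone, in bytes

phones_needed = ZETTABYTE / PHONE_STORAGE
print(f"One zettabyte fills roughly {phones_needed / 1e9:.0f} billion smartphones")
# -> roughly 33 billion, in the same ballpark as the ~34 billion quoted above
```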
The challenge
The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. IDC estimates that 60% of what is stored in data centres is actually copy data: multiple copies of the same thing, or outdated versions. The vast majority of this stored data consists of extra copies of production data created every day by disparate data protection and management processes such as backup, disaster recovery, development, testing and analytics.
[easy-tweet tweet=”An estimated 60% of the #data we currently store is useless copy data” user=”comparethecloud”]
IDC estimates that a single company can circulate up to 120 copies of the same production data, and that the cost of managing this flood of data copies has reached $44 billion worldwide.
Tackling data bloating
While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of data bloating. In the same way that prevention is better than cure, reducing this weed-like data proliferation should be a priority for all businesses.
why aren’t we addressing the root cause of data bloating?
The ‘golden master’
Data virtualisation – freeing organisations’ data from their legacy physical infrastructure just as virtualisation did for servers a decade ago – is increasingly seen as the way forward. In practice, copy data virtualisation can reduce storage costs by as much as 80%. At the same time, it makes virtual copies of ‘production quality’ data available immediately to everyone in the business, wherever they need it.
Data virtualisation – freeing organisations’ data from their legacy physical infrastructure
That includes regulators, product designers, test and development teams, backup administrators, finance departments, data analytics teams, and marketing and sales departments. In fact, any department or individual who needs to work with company data can access and use a full, virtualised data set. This is what true agility means for developers and innovators.
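The mechanics behind this are easiest to see in code. The sketch below is a minimal, conceptual illustration of the ‘golden master’ idea, in which many virtual copies share one physical data set and each records only its own changes (copy-on-write). The class and field names are invented for illustration; this is not any vendor’s actual implementation.

```python
# Conceptual sketch of the 'golden master' idea: many virtual copies share a
# single physical data set and store only their own changes (copy-on-write).

class GoldenMaster:
    """The single physical copy of production data, keyed by block id."""
    def __init__(self, blocks):
        self.blocks = blocks                 # e.g. {block_id: bytes}

class VirtualCopy:
    """A lightweight view for backup, test/dev, analytics, and so on."""
    def __init__(self, master):
        self.master = master
        self.deltas = {}                     # only the blocks this copy has changed

    def read(self, block_id):
        # Serve local changes first, otherwise fall through to the master.
        return self.deltas.get(block_id, self.master.blocks[block_id])

    def write(self, block_id, data):
        # Writes never touch the master, so other copies are unaffected.
        self.deltas[block_id] = data

# Ten teams get 'full' copies instantly, while physical storage stays close to 1x.
master = GoldenMaster({1: b"customer table", 2: b"orders table"})
copies = [VirtualCopy(master) for _ in range(10)]
copies[0].write(2, b"orders table (test data)")
print(copies[0].read(2))   # b'orders table (test data)'
print(copies[1].read(2))   # b'orders table' - still sees the golden master
```

The design point is simply that adding another consumer of the data adds a small delta, not another full physical copy, which is why storage and management overheads stop scaling with the number of teams.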
Moreover, network strain is eliminated. IT staff, traditionally dedicated to managing the data, can be refocused on more meaningful tasks that grow the business. Data management licence costs fall too, because backup agents, separate de-duplication software and WAN (wide area network) optimisation tools are no longer required.
[easy-tweet tweet=”By eliminating physical copy #data and working off a golden master our storage problems are over”]
By eliminating physical copy data and working off a ‘golden master’, storage capacity requirements are reduced, along with all the attendant management and infrastructure overheads. The net result is a more streamlined organisation that drives innovation and improves the business’s competitiveness faster.
Ashutosh remains an avid supporter of entrepreneurship and is an advisor and board member for several commercial and non-profit organizations. He holds a degree in Electrical Engineering and a Masters degree in Computer Science from Penn State University.