Is the ‘new oil’ analogy appropriate, and what data trends should enterprises pay attention to this year?

According to leading market research firms, data is fulfilling its promise as the new “oil” of the digital economy.

If data is the new oil, then the safe, secure and reliable storage of this “oil” is more critical than ever. This is especially true in today’s hyperconnected world, where a single slip-up can lead to multimillion-dollar losses and an erosion of customer trust and confidence.

DigiconAsia asks Joe Ong, Vice President and General Manager, Hitachi Vantara ASEAN, for his take on the “oil” analogy and other data trends, and how enterprises large and small can take advantage of them to further grow in the new year.

In your opinion, what technology trends would impact user organizations in APAC (or ASEAN) the most in 2020?

Ong: I expect 5G to emerge as the “game changer” of 2020. APAC is leading the charge, with countries such as South Korea, Japan and China set to invest over US$280 billion in 5G from 2019 to 2025. Mobile handset and laptop manufacturers are already building 5G compatibility into their devices.

Beyond the expected influx of data, in both volume and variety, we will likely see edge computing gain ground in the region and potentially become a necessity for enterprises. Processing data centrally, whether on-premises or in the cloud, will likely no longer be fast enough for impactful decisions to be made.

Firms are beginning to understand the power of edge computing and the applications made possible by processing raw data closer to its source. With edge computing, companies of all sizes will be able to obtain critical real-time updates and make business decisions faster.

The combination of 5G and edge computing in 2020 will also supercharge the influx of IoT devices, as organisations deploy more devices to augment operations, a trend analysts have anticipated for some time.

At the heart of all these developments, the ability of businesses to monetise their data quickly yet securely will be critical to staying ahead of the competition. We are already seeing more organisations invest in multiple technologies to modernise their IT infrastructure and support their digital transformation initiatives.

How can enterprises in the region make better use of their data, and better secure it?

Ong: The phrase “data is the new oil” has been around for some time, but the comparison may no longer be accurate. If data had to be likened to an energy source, it would be closer to a renewable one such as wind or solar: data does not diminish or run out, it is infinitely reusable, and the more you use it, the better it gets.

Harnessing data effectively leads to competitive advantage, and it is this knowledge that fuels the push by many organisations towards digitalisation. But not everyone is getting it right.

Hitachi Vantara’s take is that today only 5% of corporate data has been successfully analysed to drive business value.

Many companies are missing out on the low-hanging fruit because they do not realise how much data they already have. The data they do use mostly monitors historic activity rather than informing forecasts and business decisions. The core issue is that organisations do not know where their data is, because the data is “everywhere.”

Organisations likely have data pulled from sensors on their machines, CRM systems, invoicing systems, maintenance systems, enterprise resource planning and the like.

Pulling all that data together into one manageable data lake can be a challenge, and the more data you have, the harder it gets. Data scientists cannot do their jobs effectively because so much of their time is spent just making the available data actionable.
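To make the consolidation problem concrete, here is a minimal sketch, using hypothetical file names and columns, of joining CRM, invoicing and sensor exports into a single analysable view. The unglamorous key and timestamp normalisation it performs is exactly the work that eats into a data scientist’s day:

```python
# A minimal sketch (hypothetical sources and schemas) of pulling data that
# lives in separate systems into one analysable, customer-level table.
import pandas as pd

# Each system exports its own format; none share a common schema out of the box.
crm = pd.read_csv("crm_customers.csv")        # customer_id, name, region
invoices = pd.read_csv("invoice_system.csv")  # invoice_id, customer_id, amount, issued_at
sensors = pd.read_csv("machine_sensors.csv")  # machine_id, customer_id, reading, recorded_at

# Normalise timestamps before joining.
invoices["issued_at"] = pd.to_datetime(invoices["issued_at"])
sensors["recorded_at"] = pd.to_datetime(sensors["recorded_at"])

# Build one unified view combining revenue and machine activity per customer.
revenue = invoices.groupby("customer_id")["amount"].sum().rename("total_revenue")
activity = sensors.groupby("customer_id")["reading"].mean().rename("avg_reading")
unified = crm.set_index("customer_id").join([revenue, activity])

print(unified.head())
```

Even in this toy case, most of the code is plumbing rather than analysis; multiply it across dozens of systems and the scale of the problem becomes clear.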

The answer to this predicament is DataOps. A relatively new methodology, DataOps is enterprise data management for the AI era, centred on automating many of the processes that consume a data scientist’s time.

DataOps is not a product, service or solution; it is a methodology: a technological and cultural change that improves an organisation’s use of data through enhanced collaboration and automation. It aims to ensure that a firm’s data is in the right place, at the right time, and accessible to the right people. It is a more agile, more deliberate approach to data science that ultimately enables better insights and better business decisions.

Implementing DataOps is not a simple undertaking, but I would like to share five major steps towards realising the full potential of a company’s data. To start, assess and tune your technology portfolio and processes to remove redundancy and consolidate control within your teams. Then consolidate between your teams to encourage sharing and reduce the inconsistencies that hamper collaboration. Third, integrate DataOps practices across your teams and data pipelines. This is often a difficult stage, because collaboration requires your people to use unfamiliar processes and to trust teams they have not worked with before.

By the fourth step, you have aligned your people and it is time to automate your processes. Automation makes your data pipelines more efficient and your data operations more effective. But you are not done yet. The fifth and final step is giving your data consumers the ability to serve themselves. This is where data quickly becomes information and insight, unlocking the full power of DataOps across your organisation.
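As an illustration of the automation in step four, here is a minimal sketch, with hypothetical validation rules and column names, of a data-quality gate that runs on every batch load, so checks once done by hand happen consistently, without a data scientist in the loop:

```python
# A minimal sketch (hypothetical rules) of automating a pipeline step:
# every loaded batch passes the same codified data-quality checks.
import pandas as pd

def validate(batch: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in a freshly loaded batch."""
    problems = []
    if batch.empty:
        problems.append("batch is empty")
    if batch["customer_id"].isna().any():
        problems.append("rows missing customer_id")
    if batch.duplicated(subset=["invoice_id"]).any():
        problems.append("duplicate invoice_id values")
    return problems

def load_batch(path: str) -> pd.DataFrame:
    batch = pd.read_csv(path)
    issues = validate(batch)
    if issues:
        # A production pipeline would alert the owning team;
        # raising keeps this sketch simple.
        raise ValueError(f"rejected {path}: {', '.join(issues)}")
    return batch
```

Rejecting bad batches at load time keeps the data lake trustworthy, which is what makes the self-service of step five possible: consumers can rely on what they find without checking it themselves.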