What could be the ‘secret sauce’ for data-led digital transformation for organizations in Asia Pacific?

Today, data transformation is no longer just a means of achieving short-term results; it is crucial to an organization’s long-term success.

However, with the exponential growth in data that is created, shared, analyzed and stored, the data landscape for many organizations has become highly complex and disparate – containing multiple data assets with overlapping data and capabilities across technology and business units.

Data is critical to many APAC organizations’ digital transformation journeys. What changes in strategy, design and architecture would be necessary to become a successful data-led, digitally driven intelligent organization?

DigiconAsia gained some insights from Ed Lenta, SVP and GM, APJ, Databricks:

Data is critical to digital transformation. However, the data landscape for many organizations is highly complex and disparate. What is the ‘secret sauce’ to delivering data-led digital transformation for organizations today?

Ed Lenta: Data underpins digital transformation – it empowers businesses to identify trends, opportunities and problems in their operations, which can then be addressed to improve overall performance. Data volumes are growing exponentially thanks to IoT and new digital streams, yet many companies fall short of maximizing their data strategies because of siloed systems and the lack of a unified governance model. According to a 2022 survey by Databricks and MIT Technology Review Insights, 72% of C-level respondents said that problems with data management would jeopardize future AI achievements.

In the past, companies had to maintain proprietary data warehouses for BI workloads and data lakes for AI, data science and machine learning workloads, often across multiple cloud platforms. Because these platforms were incompatible with one another, the result was a complicated, expensive architecture that slowed organizations’ ability to get value from their data.

The secret sauce to delivering data-led digital transformation today is the lakehouse architecture. It combines the best qualities of data warehouses and data lakes to provide a single solution for all major data workloads, supporting use cases from streaming analytics to BI, data science and AI. The lakehouse dramatically simplifies an organization’s data platform by unifying all data workloads on a single platform. Databricks invented and pioneered the lakehouse, and recently we’ve seen other vendors in our space begin to champion it too.
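
What this unification looks like in practice can be sketched in a few lines. The following is a minimal illustration, assuming a local PySpark environment with the open source delta-spark package installed (on Databricks, the `spark` session is provided for you); the table and columns are invented for the example:

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Build a Spark session with Delta Lake enabled (on Databricks,
# `spark` already exists and this setup is unnecessary).
builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Hypothetical sales data, written once in the open Delta format.
spark.createDataFrame(
    [("sg", "2024-01-01", 120.0), ("au", "2024-01-01", 80.0)],
    ["region", "order_date", "amount"],
).write.format("delta").mode("overwrite").saveAsTable("sales")

# BI workload: plain SQL over the table.
spark.sql("SELECT region, SUM(amount) AS revenue "
          "FROM sales GROUP BY region").show()

# Data science workload: the same table as a DataFrame, ready for
# feature engineering or ML, with no copy between systems.
features = spark.table("sales").groupBy("region").avg("amount")
features.show()
```

The point of the sketch is that both workloads read one copy of the data through one engine, rather than an export from a warehouse into a separate lake (or vice versa).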


Ed Lenta, SVP and GM, APJ, Databricks

What are some key trends shaping the future of data analytics, especially in the Asia Pacific region?

Ed Lenta: The demand for AI and data analytics is increasing as tougher economic conditions prompt more companies to adopt software that promises to drive better business decisions – from delivering real-time business insights and enabling customer personalization to preventing fraud.

For example, Grab has over 6 billion transactions on its platform, and the company needed to scale its ability to deliver personalized experiences across transport, food delivery and digital payments. Grab leveraged the Databricks Lakehouse to build a Customer360 platform that accurately forecasts consumer needs and preferences. Thousands of customer-centric attributes are stored in Delta Lake and accessed as the single source of truth by data teams through the lakehouse, so that they can easily collaborate to explore customer data, insights, attributes and customer lifetime value.

The data analytics platform delivers these insights at scale, democratizing data through the rapid deployment of AI and BI use cases across Grab’s operations. Grab can now make more personalized recommendations and engineer new features that are better aligned with customer preferences.
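
Grab’s actual Customer360 implementation is not public, so the following is only a rough sketch of the underlying pattern: many teams reading one governed Delta table as their single source of truth. The table, columns and lifetime-value formula are invented for illustration, and the same local delta-spark setup as in the earlier sketch is assumed:

```python
from pyspark.sql import SparkSession, functions as F
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("customer360-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# One shared Delta table of customer attributes: the single source
# of truth that every data team reads.
spark.createDataFrame(
    [(1, 42, 310.0, 0.9), (2, 7, 55.5, 0.4)],
    ["customer_id", "orders", "total_spend", "retention_score"],
).write.format("delta").mode("overwrite").saveAsTable("customer360")

attrs = spark.table("customer360")

# Team A: a toy customer-lifetime-value estimate from the shared attributes.
clv = attrs.withColumn("clv", F.col("total_spend") * F.col("retention_score"))
clv.select("customer_id", "clv").show()

# Team B: segmentation off the very same table, with no extra copy.
attrs.where(F.col("orders") > 10).select("customer_id").show()
```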

In Asia Pacific, spending on big data and analytics is estimated to grow by 19% in 2022 and to rise 1.6 times to US$53.3 billion by 2025, according to IDC. Data and artificial intelligence (AI) are at the forefront of business-critical decisions. Companies know that in order to outpace competitors and delight their customers, they need to look ahead, using data in real time to predict and plan for the future, rather than spend time looking back.

That is why we predict that AI and automation will be one part of the corporate budget unlikely to see cuts as companies tighten spending amid an economic downturn. Tools that CIOs can provide to the business to find inefficient processes or simplify operations can help companies ride out a bad economy. CIOs are also looking to use AI without being locked into a given technology platform, and open source data tools can help avoid that lock-in.

How do you define a data lakehouse, and how is it different from data warehouses and data lakes? How do the added flexibility and catch-all manner of data lakehouse give enterprise analytics an edge?

Ed Lenta: A data lakehouse is a new form of open data management architecture – pioneered by Databricks – that combines the reliability, strong governance and performance of data warehouses with the openness, flexibility and machine learning support of data lakes. It is the ideal data architecture for data-driven organizations: a single solution for all major data workloads, supporting use cases from streaming analytics to BI, data science and AI. This means data teams can move faster, because they can use data without needing to access multiple systems. Data lakehouses also ensure that teams have the most complete and up-to-date data available for data science, machine learning and business analytics projects.

Data warehouses have a long history in decision support and business intelligence applications, dating back to their inception in the late 1980s. But while warehouses are great for structured data, modern enterprises now have to deal with large amounts of unstructured data such as images, video, audio and text, which data warehouses were not built to handle.

Data lakes were therefore developed in response to the limitations of data warehouses; they can process all data types, including the unstructured and semi-structured data that are critical for AI, machine learning and advanced analytics use cases. However, many of the promises of data lakes have not been realized because they lack critical features: support for transactions, enforcement of data quality and governance, and performance optimizations.
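
Delta Lake, the open source storage layer underneath the Databricks Lakehouse, was designed to fill exactly these gaps. The sketch below, again assuming a local delta-spark setup and using invented data, shows two of those guarantees in action: a transactional MERGE upsert and schema enforcement on write:

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

builder = (
    SparkSession.builder.appName("delta-guarantees")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# A small Delta table of hypothetical customer records.
spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
).write.format("delta").mode("overwrite").save("/tmp/customers")

customers = DeltaTable.forPath(spark, "/tmp/customers")

# ACID upsert: MERGE updates matching rows and inserts new ones as a
# single transaction, so readers never see a half-applied change.
updates = spark.createDataFrame(
    [(2, "bob@new.example.com"), (3, "carol@example.com")],
    ["customer_id", "email"],
)
(customers.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Schema enforcement: appending data whose schema does not match the
# table raises an error instead of silently corrupting the table.
bad = spark.createDataFrame([(4, 12345)], ["customer_id", "phone"])
try:
    bad.write.format("delta").mode("append").save("/tmp/customers")
except Exception as e:
    print("append rejected:", type(e).__name__)
```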

A common approach is to use multiple systems – a data lake, several data warehouses, and other specialized systems. A multitude of systems introduces complexity and, more importantly, delay, as data professionals invariably need to move or copy data between them.

The data lakehouse was created to address the complex, outdated legacy infrastructure that cannot deliver on the promise of AI. It is a new data management architecture that radically simplifies enterprise data infrastructure and accelerates innovation in an age when machine learning is poised to disrupt every industry.

One company that has leveraged the Databricks Lakehouse to spur business growth with a cost-effective and resilient modern data platform is Shift, a fast-growing fintech company making it simple and convenient for Australian businesses to access capital. Speed is central to Shift’s business goals, but processing large volumes of banking data and customer records was slowing the company down.

By implementing the Databricks Lakehouse in its technology stack, Shift has centralized its data sources in one unified, scalable place to uncover meaningful insights more efficiently. And with the ability to expedite the full machine learning lifecycle on the lakehouse, Shift can process data 90% faster than before, improving its time-to-market for new solutions by 24 times while boosting its predictive capabilities.
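
Shift’s pipeline itself is not public, but a feel for what “expediting the machine learning lifecycle” means in practice can be had from MLflow, the open source ML lifecycle tool originally created by Databricks. The sketch below tracks one toy model run; the data, model and metric are placeholders, not Shift’s actual workload:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for banking transactions and customer
# records; in practice the features would come from the lakehouse.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each run records parameters, metrics, and the model artifact, so
# experiments stay reproducible and models keep a tracked lineage.
with mlflow.start_run(run_name="credit-risk-baseline"):
    model = LogisticRegression(C=0.5, max_iter=1_000).fit(X_train, y_train)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("accuracy",
                      accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```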

Does size matter? Would a data lakehouse be necessary for startups and SMBs, or would the benefits be less obvious compared to larger companies?

Ed Lenta: The lakehouse can transform businesses of any size – from start-ups and SMBs to large MNCs. Businesses are moving away from warehouses, on-premises software and other legacy infrastructure to the lakehouse. To achieve greater speed to market, they are also adopting ready-to-use platforms instead of building everything in-house from the ground up. They have realized that, in order to scale quickly, they need technology agile enough to manage the volume of data that comes with growth.

We’ve seen many digital native companies from around the world, including Asia Pacific, leverage the Databricks Lakehouse platform to scale up their businesses and spur growth through data-driven decision-making. These range from renowned companies such as Atlassian, Grab, Meesho, Dream11 and Bigtincan to up-and-coming start-ups such as Shift and Hivery. By implementing a data lakehouse as part of their modern data stack, digital native businesses have shown they can scale for growth and stay uniquely connected to their customers by putting data in the hands of every team member.