Data fabrics free data to be innovated upon at its source while making it easier to discover, manage, and govern.

In tandem with the rise of the digital economy, business organizations have had to move quickly and strategically to identify, capture, and optimize the use of data in business decision-making.

The current global health crisis has proven to be a catalyst in putting data front and center in the way businesses operate today, with processes and systems fast becoming streamlined, data-dependent, and data-driven.

However, with valuable data existing in varied formats and quality levels, the rise of data sprawl (the overwhelming volume and variety of data produced) has posed a real challenge for many organizations. That is, organizations often struggle to make full use of their often-siloed data and to convert it into intelligible insights.

According to a recent IDC report, more than 80% of IT leaders reported data sprawl as one of the most critical problems they face today. Nonetheless, data management has evolved significantly since the 1970s, when data was first stored and accessed on disk spindles and punch cards. Here, I introduce the concept of the logical data fabric as a modern solution to data sprawl: a technique that allows data to be more efficiently converted and used for business purposes.

The Logical Data Fabric

To enable business organizations to achieve a unified view of information across applications and databases, computer scientists have created centralized repositories for storing data, starting from the concept of data warehouses in the 1990s and graduating to data lakes in the past decade.

This approach, which attempts to work against data gravity (the pull of data toward its sources and away from a central repository), works only for a period of time. Practitioners have realized that consolidated data quickly drifts back into decentralized components as new data continuously sprouts in other locations.

In due course, a new data management paradigm called the logical data fabric has emerged, one that avoids forcefully consolidating data into a central physical repository. Similar to the weaving of textile fibers to produce cloth, the logical data fabric knits together a virtual view of data across applications, leaving data at the sources where it is created while providing a unified view of all enterprise data.

What makes the data fabric successful is that data virtualization forms its core technology and many of its capabilities are automated through AI and Machine Learning (ML).

As organizations increasingly store data in multiple cloud-based platforms, which can add to existing on-premises data silo problems, the concept of a data fabric has grown in importance.

Stop collecting and start connecting

Specifically, a logical data fabric enables organizations to stop ‘collecting’ their data into a central repository and start ‘connecting’ to the data at its sources through data virtualization. Regardless of the data’s location (on-premises or in the cloud), format (structured or unstructured), or latency (data in motion or data at rest), data fabrics free the data to be innovated upon at its sources while bringing it together virtually for the benefit of data discovery, management, and governance. The fabric thus works with, rather than against, data gravity.
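As a rough illustration of the “connect, don’t collect” idea, consider the minimal sketch below. The source names and connector classes are hypothetical, and a real data virtualization platform would expose this through SQL and a semantic layer rather than hand-written code; the point is only that the unified view is assembled at query time, with nothing staged in a central store.

    # Minimal sketch of "connect, don't collect": a virtual view that reads from
    # each source at query time instead of copying data into a central repository.
    # Source names and connector classes are hypothetical.
    from typing import Dict, Iterable, List

    class OnPremOrdersConnector:
        """Stands in for reading order rows from an on-premises database."""
        def rows(self) -> Iterable[Dict]:
            yield {"order_id": 1, "customer": "Acme", "amount": 120.0}

    class CloudCrmConnector:
        """Stands in for reading customer rows from a cloud CRM API."""
        def rows(self) -> Iterable[Dict]:
            yield {"customer": "Acme", "segment": "Enterprise"}

    class VirtualView:
        """Joins the sources on the fly; nothing is persisted centrally."""
        def __init__(self, orders, crm):
            self.orders, self.crm = orders, crm

        def query(self) -> List[Dict]:
            segments = {r["customer"]: r["segment"] for r in self.crm.rows()}
            return [
                {**o, "segment": segments.get(o["customer"], "unknown")}
                for o in self.orders.rows()
            ]

    view = VirtualView(OnPremOrdersConnector(), CloudCrmConnector())
    print(view.query())  # unified rows assembled at query time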

In this way, a logical data fabric greatly improves the efficiency of business data users. For one, because data no longer needs to be moved from its source into a temporary repository, IT teams no longer need to program extract, transform, and load (ETL) scripts to convert data before loading it into target systems; data virtualization can perform transformations on the fly, which also saves on storage costs.
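To make the contrast with ETL concrete, here is a small, purely illustrative example: instead of a scheduled script that converts data and writes it to a staging store, the transformation (here, simple currency and date normalization) is applied as the rows flow through the virtual layer.

    # Hedged sketch: transformation applied "on the fly" at query time rather than
    # in a separate ETL job that writes converted data to a target system first.
    from datetime import datetime

    RAW_ROWS = [  # stands in for rows streamed from a source system
        {"amount": "1,200.50", "currency": "USD", "date": "03/15/2021"},
    ]

    def transform(row):
        """Normalize types and formats as the row passes through the virtual layer."""
        return {
            "amount": float(row["amount"].replace(",", "")),
            "currency": row["currency"],
            "date": datetime.strptime(row["date"], "%m/%d/%Y").date().isoformat(),
        }

    def query():
        # A generator: nothing is materialized between the source and the consumer.
        return (transform(r) for r in RAW_ROWS)

    print(list(query()))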

Furthermore, data virtualization’s low- to no-code approach significantly reduces the manpower and effort required to develop the unified views that organizations demand.

More importantly, the logical data fabric serves as a virtual catalog of all data assets within the enterprise, including each asset’s origin, format, and relationships with other data assets. Through this catalog, business users no longer need to access different systems to perform actions such as data discovery or to document business definitions for data governance.
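One way to picture such a catalog is as simple metadata records kept alongside the views themselves. The fields below are illustrative, not any specific product’s schema.

    # Illustrative catalog entry: origin, format, and relationships for one asset.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CatalogEntry:
        name: str                      # business-friendly asset name
        source: str                    # where the data originates
        data_format: str               # e.g. relational table, JSON, Parquet
        related_assets: List[str] = field(default_factory=list)
        business_definition: str = ""  # documented here for governance

    catalog = [
        CatalogEntry(
            name="customer_orders",
            source="on-prem ERP",
            data_format="relational table",
            related_assets=["crm_customers"],
            business_definition="All confirmed orders, one row per order line.",
        )
    ]

    # Business users search the catalog instead of logging in to each system.
    print([e.name for e in catalog if "orders" in e.name])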

In addition, the logical data fabric possesses powerful data preparation capabilities that normalize data formats and simplify them for business consumption. Business users can then easily access the data within their favorite analytical, operational, web, or mobile applications.
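As one hypothetical example of such preparation, the snippet below maps two differently shaped source records (an assumed ERP export and an assumed SaaS API response) onto a single, business-friendly schema.

    # Sketch of data preparation: differently named and structured source fields
    # are normalized into one simplified record shape for business consumption.
    SOURCE_A = {"cust_nm": "Acme", "rev_eur": 950}               # hypothetical ERP export
    SOURCE_B = {"customerName": "Beta Ltd", "revenueUSD": 1100}  # hypothetical SaaS API

    EUR_TO_USD = 1.1  # assumed fixed rate, purely for illustration

    def normalize(record):
        if "cust_nm" in record:
            return {"customer": record["cust_nm"],
                    "revenue_usd": round(record["rev_eur"] * EUR_TO_USD, 2)}
        return {"customer": record["customerName"],
                "revenue_usd": float(record["revenueUSD"])}

    print([normalize(r) for r in (SOURCE_A, SOURCE_B)])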

Enhancing the data fabric with AI/ML

As mentioned above, the logical data fabric already embraces AI and ML to automate some routine tasks, and this is set to intensify in the future. At most business organizations, data is constantly evolving: new data sources are routinely added, and new forms of data are continually being created.

AI and ML analyze changing data patterns and automatically integrate new data into unified views, delivering it to business users in the most appropriate formats. Incorporating AI and ML technologies also allows organizations to better understand the consumption behavior of different users and to surface and share new data sets for further analysis and collaboration.
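The sketch below gives only a toy flavor of the consumption-behavior idea: a simple co-occurrence count over hypothetical query logs suggests data sets to a user based on what similar users accessed together. Production fabrics use far richer models; this is just the shape of the problem.

    # Toy recommendation from access logs: data sets frequently queried together
    # are suggested to users who have touched only one of them.
    from collections import Counter
    from itertools import combinations

    ACCESS_LOG = {  # hypothetical: user -> data sets they queried
        "ana":   {"sales", "inventory", "returns"},
        "bo":    {"sales", "inventory"},
        "carol": {"sales", "web_traffic"},
    }

    co_access = Counter()
    for datasets in ACCESS_LOG.values():
        for a, b in combinations(sorted(datasets), 2):
            co_access[(a, b)] += 1

    def recommend(user, top=2):
        seen = ACCESS_LOG[user]
        scores = Counter()
        for (a, b), n in co_access.items():
            if a in seen and b not in seen:
                scores[b] += n
            if b in seen and a not in seen:
                scores[a] += n
        return [d for d, _ in scores.most_common(top)]

    print(recommend("carol"))  # ['inventory', 'returns']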

Other AI and ML benefits include raising efficiency and productivity levels among data scientists working on related projects. These data fabrics will enable them to optimize data models for deeper analytics and insights.

Thanks to the tremendous benefits outlined above, the logical data fabric is no longer just a concept: its adoption has been gaining speed. By adhering to the principles of data gravity, providing business-friendly views of enterprise data in its entirety, and automating data monetization with AI and ML, data fabrics will be one of the hottest data management trends in 2020 and beyond.