Building business agility with data virtualisation

Robin Fong • 5 min read
Data virtualisation enables enterprises to operate in more productive and cost-effective ways. Photo: Pexels

Science-fiction movies have long visualised a future where multiple streams of data can be called upon instantly in real time, with savvy characters swiping through visualised infographics with fluidity and ease. In reality, data in business today is less sci-fi wizardry and more about practical real-time connectivity: how do we thoughtfully and methodically weave together an ever-growing collection of datasets and data sources to generate business insights, growth and positive outcomes?

Value comes from bridging abstract data with tangible real-world outcomes such as time and cost savings. Likewise, as artificial intelligence (AI) becomes embedded in more aspects of modern society, the volume and complexity of data required to build useful machine learning models demand more cohesive ways of connecting and applying that data, ways that can create value and economic growth. Data virtualisation answers this need, enabling enterprises to operate in more productive and cost-effective ways: businesses gain real-time access to data across varying formats, locations and deployment models, and can streamline their workflows accordingly.

A unified approach to data

Enterprises generate, process and analyse immense volumes of data. Unifying that data is challenging, and leaving it fragmented has costly consequences.

In Asia Pacific, the pandemic accelerated digital transformation in the healthcare industry and spotlighted the urgent need for interoperability among information technology systems coping with an influx of new patient data. The healthcare industry generates as much as 30% of the world's data, yet that data is often scattered across specialist practices, departments and healthcare institutions. These inefficiencies are estimated to cost hospitals over US$300 billion in lost productivity every year, a strong argument for managing data better.

For one of our healthcare projects, we partnered with a client that provides critical data, in the form of management information and business intelligence, to over 10,000 users via a traditional data warehouse. As data volumes and user demands grew, the client faced challenges in improving data integration speeds, reducing latency and accommodating new data sources.

Data virtualisation removed these obstacles, as it enables real-time data integration without physically moving any data. By implementing a logical data warehouse alongside the existing physical data warehouse, the client gained faster access to integrated data, could accommodate new data sources easily and simplified ongoing maintenance. This new infrastructure allowed for a more efficient and agile data environment and facilitated several new projects.

These newly enabled projects include an enhanced cancer intelligence platform and a persons-at-risk dataset. A national finance platform was also created, drawing on existing procurement, finance, workforce and activity data and presenting it in a single finance view, enabling the client to take a more holistic approach.
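In highly simplified terms, the mechanism behind a logical data warehouse can be sketched in a few lines of code. The sketch below is only an illustration, not the client's actual implementation: the connection strings, tables and columns are hypothetical, and a production deployment would rely on a data virtualisation platform rather than hand-written joins. The point is that the integrated view is assembled at query time while the underlying data stays in place.

    # Minimal sketch of a logical data warehouse view (hypothetical names throughout):
    # two physical sources are joined on demand, and no combined copy is persisted.
    import pandas as pd
    from sqlalchemy import create_engine, text

    # The physical sources stay where they are; only connections are defined here.
    warehouse = create_engine("postgresql+psycopg2://user:pw@warehouse.internal/clinical")
    dept_db = create_engine("postgresql+psycopg2://user:pw@dept-db.internal/referrals")

    def patient_activity_view(patient_id: str) -> pd.DataFrame:
        """Return one integrated view of a patient's records, assembled at query time."""
        admissions = pd.read_sql(
            text("SELECT patient_id, admitted_on, ward FROM admissions WHERE patient_id = :pid"),
            warehouse, params={"pid": patient_id},
        )
        referrals = pd.read_sql(
            text("SELECT patient_id, referred_on, specialty FROM referrals WHERE patient_id = :pid"),
            dept_db, params={"pid": patient_id},
        )
        # The "logical warehouse" is simply this join, computed when it is asked for.
        return admissions.merge(referrals, on="patient_id", how="left")

Adding a new source in this model means adding a connection and extending the view definition, rather than rebuilding and reloading a physical warehouse.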

As institutions and enterprises alike tackle problems of multi-dimensional complexity, a more cohesive, integrative way of connecting data becomes valuable. A data fabric unifies enterprise-wide data, bringing insights, clarity and potentially new solutions.

Unlocking self-service analytics

Unifying data has traditionally meant a tedious process of duplicating datasets, then transforming and mapping them from one format to another. This manual process, commonly referred to as data wrangling, is a workflow challenge, and as enterprise IT teams handle ever-greater volumes of data in daily operations, it slows routine work and grows into a legitimate operational expense.

Data virtualisation today enables business intelligence teams and power business users to conduct self-service analytics in real time. Analysts can access data, analyse it and generate reports without losing contextual information or having to laboriously recreate the context of datasets. Through a web browser-based interface, configurations can be set up to deliver self-service analytics in real time, making the service accessible to business intelligence analysts and data architects alike. Most significantly, this can yield up to an 83% reduction in time-to-value, a metric crucial for leaders such as the chief information officer (CIO) in controlling cost and ensuring projects are delivered on time and within budget.

A medical device manufacturer stored its enterprise data across multiple enterprise resource planning (ERP) systems. The datasets were in different formats, managed by different business functions and kept in different geographic regions. Because the manufacturer was collecting and reporting a high volume of operational data, this fragmentation created duplicated and redundant reporting processes that significantly slowed daily operations.

Our project involved implementing a data fabric layer to enable real-time access to these datasets across different systems, formats and locations. This connectivity enabled the relevant teams to conduct self-service analytics, and turnaround time for sales reporting dropped from two business days to two hours. The practical benefits had an immediate impact and led to implementations in other enterprise functions such as quote-to-cash, customer 360-degree views and master data management.
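Again in simplified terms, the sketch below illustrates what a fabric-style view means for analysts: one query spans sources in different formats without first consolidating them. The file paths and columns are hypothetical stand-ins for exports from different ERP systems, and a real deployment would use a dedicated platform rather than ad hoc scripts.

    # Hypothetical illustration: one logical query over sources in different formats.
    # The data stays in its original files; only the result is materialised.
    import duckdb

    sales_view = duckdb.sql("""
        SELECT o.region, o.order_id, o.amount, c.segment
        FROM read_parquet('erp_emea/orders.parquet') AS o
        JOIN read_csv_auto('erp_apac/customers.csv') AS c
          ON o.customer_id = c.customer_id
        WHERE o.order_date >= DATE '2024-01-01'
    """).df()

    print(sales_view.head())

The design choice here is the same one the project made at enterprise scale: analysts work against a single logical view, while the underlying systems keep ownership of their own data.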

Today, data virtualisation is creating options: connecting data through an abstraction layer can unlock new data-driven use cases.

Self-service analytics is one such use case. By speeding up data workflows, streamlining data management practices and connecting business intelligence teams directly to data, enterprises can reduce operational costs and discover new business value in combining data.

Data as an economy

Growing awareness of data ownership in consumer-facing news highlights that data is no longer an abstract business commodity. Data is an asset, and everyone can benefit from maximising its potential value. Connecting data in ways that yield new perspectives, insights or knowledge to inform decision-making is inherently valuable. Whether it is helping business customers uncover hidden credit risk profiles or keeping consumers better informed, data virtualisation offers ways for us to interact more effectively with our data. Data may not be the stuff of futuristic imagination, but its immense value today is no science fiction.

Robin Fong is the regional vice president and general manager for ASEAN & Korea at Denodo
