Generative AI has been the talk of the town this year. Set to be a game-changer across industries, generative AI could add up to US$4.4 trillion ($5.88 trillion) in value annually to the global economy, according to management consulting firm McKinsey & Company.
Organisations experimenting with the technology tend to use it in areas that bring the most value to the business, such as marketing and sales, product and service development, and service operations including customer care.
For instance, generative AI can be combined with digital twins to further improve operational efficiency, reduce cost and accelerate product development processes. “[Generative AI can produce] on-demand, self-service, or automated data using algorithms or rules to meet conditions lacking in data collected from the real world. Those synthetic data reproduce the same statistical properties, probability, and characteristics of the real-world dataset from which the synthetic data is trained. This is a privacy preservation technique,” explains Deepak Ramanathan, vice president for the Global Technology Practice at SAS Institute, a business analytics software and services firm.
He continues: “Meanwhile, a digital twin is a digital, animate, dynamic ecosystem — comprised of an interconnected network of software, generative and non-generative models, and (historical, real-time, and synthetic) data — that both mirrors and synchronises with a physical system. [By combining synthetic data with digital twins, organisations can better] simulate ‘what-if’ scenarios and stress test systems in the digital world to prescribe actions that optimise the physical world such as improving the lives of individuals, populations, cities, organisations, the environment, systems, products, and more.”
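To make the combination concrete, the following is a minimal Python sketch (not SAS’s implementation; the readings, distribution and threshold are hypothetical) of the two ideas in Ramanathan’s description: synthetic data is generated to reproduce the statistical properties of a real-world dataset, and a simple “what-if” scenario is then stress-tested against it.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend this is a real-world dataset of machine temperatures (hypothetical values).
real_temperatures = rng.normal(loc=72.0, scale=3.5, size=1_000)

# Learn the statistical properties of the real data...
mean, std = real_temperatures.mean(), real_temperatures.std()

# ...and generate synthetic data that reproduces them, without exposing real records.
synthetic_temperatures = rng.normal(loc=mean, scale=std, size=10_000)

# "What-if" scenario in the digital world: what happens if ambient heat rises by 5 degrees?
stressed = synthetic_temperatures + 5.0
overheat_risk = (stressed > 85.0).mean()
print(f"Estimated share of readings above the 85-degree limit: {overheat_risk:.1%}")
```

Real deployments would use far richer models of the physical system, but the pattern is the same: learn from real data, simulate in the digital world, then prescribe actions for the physical one.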
The transformative value of generative AI can only be realised when it is used across the entire organisation, rather than in just a few departments. This calls for an enhanced IT backbone — one that can provide flexibility and effectively support generative AI’s technical requirements. Here are some areas organisations should look at to achieve that.
Enabling multimodal (generative) AI
There is no all-encompassing generative AI solution that suits every use case, so organisations need the flexibility to work with a range of models.
“No single model does everything. [This is why Amazon Web Services (AWS) provides] customers with the choice and flexibility through Amazon Bedrock, a service that makes it easy to build and scale generative AI applications with foundation models (FMs) from leading AI companies including AI21, Anthropic, Stability AI, and Amazon Titan. With Amazon Bedrock, customers can easily experiment with a variety of top FMs and customise them privately with their proprietary data,” says Priscilla Chong, country manager of AWS Singapore.
She adds: “Our customers care deeply about the provenance of their data because in a lot of cases, that data is their customers’ data. They often don’t want that data to be used to train an external model hosted by someone else as they can’t assure customers about provenance. We’ve put a lot of effort into making sure that when customers fine-tune their FMs in Amazon Bedrock, the tuning happens to a private copy of the model that is owned by the customer.”
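As a rough illustration of what experimenting with a foundation model on Amazon Bedrock can look like in practice, here is a minimal Python sketch using the AWS SDK (boto3). It assumes Bedrock access has already been granted in the account and uses Anthropic’s Claude v2 model ID; request and response fields are model-specific, so other FMs will expect a different schema.

```python
import json
import boto3

# Bedrock runtime client; assumes AWS credentials and model access are already configured.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request schema is model-specific; this example follows Anthropic Claude's prompt format.
body = json.dumps({
    "prompt": "\n\nHuman: Summarise our Q3 support tickets in three bullet points.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # swap in another FM ID to experiment with a different model
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```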
Besides ensuring data privacy, organisations must also have a robust data strategy, as data is the source of truth and the differentiator for their AI models. A data lake can help by enabling organisations to tap into vast amounts of centralised data to develop new business models and unlock new productivity gains. Chong states: “Data has gravity, and you can’t extract insights from data that is stored in siloes. To fine-tune generative AI models, businesses need quality datasets, and they need to understand the business potential that generative AI can unlock from their data.”
Cloud or on-premises?
The high cost of training and running generative AI models may slow the adoption of generative AI at scale. CNBC reported earlier this year that analysts and technologists estimated the critical process of training a large language model such as OpenAI’s GPT-3 could cost more than US$4 million.
Chong believes running generative AI applications on a cloud such as AWS can address this, as the hyperscaler has been investing in its own silicon to push the envelope on performance and price for demanding workloads like machine learning training and inference.
“Imagine silicon chips as the engine in a high-performance sports car. The engine’s powerful mechanics allow the car to achieve incredible speeds and handle complex manoeuvres with precision. Similarly, silicon chips provide the computational muscle necessary to process vast amounts of data for complex problem-solving. [The latest] AWS Trainium and AWS Inferentia chips offer the lowest cost for training models and running inference in the cloud,” she says.
Meanwhile, Han Chon, managing director of Asean, Taiwan, Hong Kong and Korea at hybrid multi-cloud computing firm Nutanix, expects some AI workloads to be more cost-effective on-premises.
He explains: “Generative AI requires substantial computational power and is a significant contributor to escalating cloud bills. To address this, organisations need to ensure they can scale compute and storage predictably and non-disruptively in response to demand, which will allow for lower cloud costs and make cloud computing more economically sustainable for AI-intensive tasks.”
Matthew Oostveen, chief technology officer and vice-president for Asia Pacific and Japan at Pure Storage, agrees on the need for robust and scalable storage. He foresees organisations increasingly turning to all-flash storage solutions, as these can provide high-performance, low-latency storage for rapid data access and retrieval in mission-critical applications and data-intensive AI workloads.
“Deduplication (Dedup), non-disruptive upgrades and non-volatile memory express over fabrics (NVMe-oF) features will define the future of storage in the new AI era, where efficient data management and processing hinges on optimal storage performance and scalability. Customers are also considering policy-driven upgrades, which strike the right balance between frequent upgrades and maintaining a secure and supported storage environment based on their goals. Also, the energy-intensive nature of AI workloads will drive data centres to explore efficient cooling technology for effective power and rack space allocation, leading to the rise of the all-flash data centre,” he adds.
Nutanix’s Chon also expects to see generative AI being increasingly run at the network’s edge. Processing AI data closer to the data source will not only reduce the cost of running AI workloads in the public cloud but also reduce latency and ensure a resilient infrastructure.
Since organisations might run generative AI across several environments in the future, a unified operating model — spanning from the public cloud to the edge — can help facilitate AI initiatives more effectively.
For instance, Nutanix’s GPT-in-a-Box delivers ready-to-use, customer-controlled AI infrastructure for both edge and core data centre deployments. “It empowers customers to run and fine-tune AI models while maintaining control over their data, addressing their data security and intellectual property protection concerns. This approach future-proofs their infrastructure, allowing them to adapt to evolving AI trends and emerging technologies while ensuring the security and privacy of their data,” says Chon.
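For a sense of why keeping inference close to the data matters, the snippet below is a generic Python sketch (not Nutanix’s GPT-in-a-Box; the model and prompt are purely illustrative, and it assumes the Hugging Face transformers library and a small open model are installed locally). The model runs entirely on local infrastructure, so prompts containing sensitive operational data never leave the environment.

```python
from transformers import pipeline

# Load a small open model onto local hardware; nothing is sent to an external API.
generator = pipeline("text-generation", model="distilgpt2")

# Prompts containing sensitive operational data stay within the organisation's environment.
prompt = "Maintenance summary for turbine unit 7:"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```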
Network transformation
Network transformation is another area organisations should look into if they plan to build and run their own generative AI solutions.
Tay Bee Kheng, president for Asean at Cisco, shares that high-performing generative AI models need compute power (such as graphics processing units, or GPUs) to process large volumes of data, as well as networks with the right levels of latency, throughput, and scalability to transfer those large amounts of data at high speeds and in a deterministic manner to and from the GPU resources.
Moreover, they require a high level of preparedness from a cybersecurity perspective to keep the data and AI workloads secure, as well as the ability to meet the increased power demands these workloads create.
“A high-bandwidth Ethernet infrastructure is essential to facilitate quick data transfer between AI workloads. Implementing software controls like Priority Flow Control (PFC) and Explicit Congestion Notification (ECN) in the Ethernet network guarantees uninterrupted data delivery, especially for latency-sensitive AI workloads,” states Tay.
She continues: “Companies should think about deploying tools and technologies that can provide higher network bandwidth, better performance and scale, and consume less power. This will not only help them leverage the power of AI tools but also deliver on their equally important sustainability goals.
“Cisco anticipated this in 2019 when we started building a new class of networking chips we call Cisco Silicon One, designed to power enormous AI workloads. A Cisco 8201 router powered by the chip consumes 96% less energy and provides 35% more bandwidth than previous products without the chip.”
Generative AI can also help organisations manage their networks more easily and efficiently. Juniper Networks, for example, recently enhanced its Marvis chatbot with OpenAI’s ChatGPT. “Marvis was previously already capable of resolving 90% of support tickets related to technical network issues, being able to identify the root cause of malfunctions across multiple types of infrastructures.
“With generative AI tools integrated, administrators now have access to an even higher degree of actionable knowledge, and the conversational interface adds great value to how IT teams operate. [In short,] enterprises that adopt AI-native networking can not only anticipate and mitigate network congestion but also pre-emptively identify potential service disruptions and forecast customer demands,” says Lee Ming Kai, Juniper Networks’ vice president of Sales Engineering for Asia Pacific.
Tay shares the same view that generative AI should be used to automate network management. “Automation reduces manual intervention, improves efficiency and allows the infrastructure to dynamically adapt to the demands of AI workloads.”
She gives the example of how policy assistants powered by large language models will enable IT administrators to issue commands in natural language (instead of writing new code) to deploy and implement new policies. Leveraging automated infrastructure as code, generative AI can also help organisations establish and dynamically adjust their networks while providing AI-assisted network operations and remediation.
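The pattern Tay describes can be sketched in a few lines of Python. Everything here is hypothetical: the policy fields, the validation rules and the stubbed-out assistant stand in for whatever a real LLM-powered policy assistant and infrastructure-as-code pipeline would use. The point is that generated configuration is treated as structured, validated input rather than hand-written code.

```python
import json

ALLOWED_ACTIONS = {"allow", "deny"}

def llm_policy_assistant(request: str) -> str:
    # Stub for an LLM call: in practice a policy assistant would translate the
    # administrator's natural-language request into structured output like this.
    return json.dumps({
        "name": "block-guest-to-finance",
        "action": "deny",
        "source": "guest-vlan",
        "destination": "finance-subnet",
    })

def validate_policy(policy: dict) -> dict:
    # Guardrails: never apply generated configuration without checking it first.
    missing = {"name", "action", "source", "destination"} - policy.keys()
    if missing:
        raise ValueError(f"Policy is missing fields: {missing}")
    if policy["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"Unsupported action: {policy['action']}")
    return policy

request = "Stop devices on the guest network from reaching the finance subnet."
policy = validate_policy(json.loads(llm_policy_assistant(request)))
print("Policy ready to hand off to the IaC pipeline:", policy)
```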
Impact on cybersecurity
In the cybersecurity space, generative AI is a double-edged sword. It introduces new attack surfaces and potential exploitation points in AI models and related data infrastructure.
“Additionally, bad actors exploit the same tools to automate and enhance their attack strategies, such as creating sophisticated and convincing phishing emails, deep fake videos and even malware. These AI-generated threats can be difficult to detect using traditional security measures, making it a concern for cybersecurity experts,” states Steven Scheurmann, regional vice-president for Asean at cybersecurity firm Palo Alto Networks.
The good news is that generative AI can automate manual cybersecurity processes that would otherwise fall to human analysts, making it possible to stop more risks from turning into security incidents.
Scheurmann says: “With good and high-quality data, AI will become smarter and more effective in understanding and assessing possible scenarios. Coupled with AI, cybersecurity teams are better equipped to ensure the appropriate contextual application is layered on top of the analysis and respond to threat scenarios in near real time.”
The automation capability enabled by generative AI is also key in helping organisations address existing skills gaps and talent shortages in their cybersecurity teams, shares Simon Davies, senior vice president and general manager for Apac at Splunk, a unified security and observability platform provider.
“Rather than replacing jobs, generative AI is poised to streamline labour-intensive security functions, allowing professionals to focus on more intricate and strategic aspects of cybersecurity. For example, the use of machine learning within their threat detection approaches enables cybersecurity teams to automate the filtering of false positives, leading to more efficient and accurate identification of suspicious activity patterns.”
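A minimal sketch of that filtering idea, using scikit-learn on made-up alert features rather than any vendor’s detection pipeline, might look like this: a classifier trained on past analyst verdicts scores incoming alerts so that likely false positives can be suppressed before they reach the queue.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical alert features: [events per minute, failed logins, source reputation score]
X_train = rng.random((500, 3))
# Past analyst verdicts: 1 = genuine threat, 0 = false positive (synthetic labels).
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Score incoming alerts and suppress those that are very likely false positives.
new_alerts = rng.random((5, 3))
threat_probability = clf.predict_proba(new_alerts)[:, 1]
for alert, p in zip(new_alerts, threat_probability):
    status = "escalate" if p >= 0.3 else "filter as likely false positive"
    print(f"alert={np.round(alert, 2)} threat_prob={p:.2f} -> {status}")
```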
AI, he adds, has a home in the observability toolset too, with 91% of respondents to Splunk’s State of Observability 2023 report citing AIOps as an important enabler of their observability goals. AIOps refers to platforms that leverage machine learning and analytics to automate IT operations.
“For organisations to truly drive resilience for their digital systems, they must look toward unifying their security and observability processes to gain unparalleled visibility across their hybrid environments,” he adds.
“Users should not view AI as a fully independent agent, but rather as a capable teammate. By integrating the technology with ‘human-in-the-loop’ experiences, organisations can achieve faster detection, investigation and response while maintaining control over how AI is applied to their data — ensuring users continue to be in the driver’s seat when leveraging AI use,” asserts Davies.
According to the Cisco AI Readiness Index, 67% of respondents in Southeast Asia believe they have at most a year to deploy and capitalise on their AI strategy before it starts to hurt their business. Organisations in the region must therefore enhance their IT backbone quickly as they increasingly build and adopt generative AI applications and other AI solutions to future-proof themselves.