Bard and ChatGPT are two examples of generative AI that have recently captured the attention of many people, regardless of how tech-savvy they are. While the most common application of generative AI is powering customer-service chatbots, the technology can create value in many other use cases.
McKinsey & Co estimates that generative AI could add the equivalent of US$2.6 trillion to US$4.4 trillion ($3.55 trillion to $6.01 trillion) to the global economy annually across 63 use cases. This estimate would roughly double if we include the impact of embedding generative AI into software currently used for tasks beyond those use cases.
Philip Moyer, vice president of Global AI Business and Solutions at Google Cloud, is of a similar view. Banks, for instance, could use generative AI to analyse customer data, assess risk and offer tailored suggestions to help with loan underwriting. He also shares that construction firms are looking to use generative AI to identify possible safety hazards and code violations at a job site, and to understand local permitting processes.
Moyer oversees the global commercialisation and enterprise strategy for Google Cloud's ML (machine learning) Systems and AI engineering teams. Previously, he led strategic industries for Google Cloud, including financial services, healthcare, telecommunications, retail, media and gaming. He shares his observations on generative AI with DigitalEdge in an interview on the sidelines of the Google Cloud Next 2023 event in San Francisco.
Since generative AI can benefit organisations in many ways, why is AI adoption still slow?
I think many companies let great be the enemy of good, where they want everything to be perfect when they kick off an AI project.
From my observation, organisations that have gone ahead with generative AI usually start with small and targeted use cases. They pick an individual and explore how generative AI can make that person more productive. For example, a construction company may look at how a permit officer can understand local permits more quickly by using AI. Thereafter, the company can iterate multiple AI use cases around that permit officer. That way, the company focuses on getting ROI (return on investment) on the individual instead of the entire company.
HCA Healthcare in the US, for example, is piloting a solution that enables key medical information from conversations during patient visits to be documented more quickly and easily. Emergency room physicians use their hands-free devices with an app built by Augmedix that securely creates draft clinical notes automatically after each patient visit.
Augmedix’s proprietary platform then leverages natural language processing — along with Google Cloud’s generative AI technology and multi-party medical speech-to-text processing — to instantly convert the data into medical notes, which physicians review and finalise before they are transferred in real time to the hospital’s electronic health record. This eliminates the need for manual entry or dictation, freeing up time to focus on patient care.
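To make the shape of such a pipeline concrete, here is a minimal transcribe-then-draft sketch using Google Cloud's publicly documented Speech-to-Text and Vertex AI PaLM clients. It is an illustration only, not Augmedix's proprietary platform; the function name, prompt wording and model choice are assumptions.

```python
# Illustrative sketch only; not Augmedix's proprietary pipeline.
# Assumes the google-cloud-speech and google-cloud-aiplatform packages
# and standard Google Cloud credentials are available.
from google.cloud import speech
import vertexai
from vertexai.language_models import TextGenerationModel

def draft_clinical_note(audio_bytes: bytes, project: str) -> str:
    """Transcribe a patient-visit recording, then draft a note for review."""
    # Step 1: speech-to-text on the recorded conversation.
    # (Real visit-length recordings would use the streaming or
    # long-running API rather than the synchronous recognize call.)
    stt = speech.SpeechClient()
    response = stt.recognize(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
            enable_automatic_punctuation=True,
        ),
        audio=speech.RecognitionAudio(content=audio_bytes),
    )
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    # Step 2: ask a large language model to draft the clinical note,
    # which the physician reviews and finalises before it reaches the EHR.
    vertexai.init(project=project)
    model = TextGenerationModel.from_pretrained("text-bison")
    prompt = (
        "Draft a structured clinical note (history, assessment, plan) "
        "from this patient-visit transcript:\n" + transcript
    )
    return model.predict(prompt, max_output_tokens=1024).text
```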
Most senior management and board members tend to be concerned about a project's ROI for the entire business. So how should tech leaders convince them that generative AI projects should first focus on ROI on the individual?
One way is to emphasise that the goal of their generative AI projects is to improve each employee’s effectiveness by a certain percentage. By using AI to remove 10% to 20% of the drudgery work from a specific employee’s plate, that employee becomes more productive. If this continues, the company could ultimately give a day back to every employee from the time savings: cutting 20% of a five-day work week amounts to one full working day.
Productivity improvements do not happen through big AI projects, but through small, iterative ones. This is why we have rolled out a broad array of tools infused with generative AI that can be used by various end-users. Duet AI in Google Meet, for instance, can capture notes, action items and video snippets in real time, as well as send a summary to attendees after the meeting. Think about how much time is saved by not having to summarise meeting minutes manually.
There are concerns about AI being a black box, as we are unable to see how an AI system arrives at its conclusions, which could have reliability, regulatory and ethical impacts. How is Google Cloud addressing the need for explainable AI?
We are addressing it in a number of ways. First is the citation concept for enterprise search, wherein our generative AI solution cites the source of the requested information. Especially for regulated industries, it is important to be able to show where your data comes from.
Secondly, we have capabilities that let users tell our generative AI solutions not to modify results. If I am looking for information from a document on the interactions between two drugs, I do not want the generative AI model to modify that data. Our generative AI model can extract that information and put it into a sentence verbatim, so it delivers precise extraction and citation.
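As an illustration of the idea (not Google Cloud's actual mechanism), verbatim extraction with citations can be requested at the prompt level over retrieved passages. The helper below, its passage format and its prompt wording are all assumptions.

```python
# Illustrative only: one way to request verbatim extraction with citations
# from a retrieval-grounded model. Not a specific Google Cloud API.
def build_extraction_prompt(question: str, passages: list[dict]) -> str:
    """Each passage is {'id': source identifier, 'text': retrieved snippet}."""
    sources = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    return (
        "Answer the question using ONLY sentences copied verbatim from the "
        "sources below. Do not paraphrase or alter the wording. After each "
        "sentence, cite the source id in brackets.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

# Example with a single retrieved drug-interaction passage:
prompt = build_extraction_prompt(
    "What happens when drug A is taken with drug B?",
    [{"id": "label-7.1", "text": "Co-administration of drug A with drug B "
                                 "increases plasma concentrations of drug B."}],
)
```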
We are also doing a lot of work with customers right now to make prompt management easy. We capture the responses of deployed generative AI models and compare them, on an ongoing basis, to the responses those models gave when they were originally trained. We have technologies that allow us to evaluate and monitor those answers so that customers can keep the model on task, because models can drift (become less accurate over time) and be influenced by real-world factors (much like humans) to do things they are not supposed to do.
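A minimal sketch of that monitoring idea follows, assuming a fixed evaluation set of prompts with baseline answers captured at launch time. It is illustrative, not Google Cloud's tooling; a production system would compare answers with embedding-based semantic similarity rather than this simple character-level ratio.

```python
# Minimal drift-monitoring sketch: flag prompts whose answers on a fixed
# evaluation set have diverged from the baseline captured at launch.
from difflib import SequenceMatcher

def drift_report(reference: dict[str, str], current: dict[str, str],
                 threshold: float = 0.7) -> list[str]:
    """Return the prompts whose current answers diverge from the baseline."""
    drifted = []
    for prompt, ref_answer in reference.items():
        similarity = SequenceMatcher(None, ref_answer,
                                     current.get(prompt, "")).ratio()
        if similarity < threshold:
            drifted.append(prompt)
    return drifted

baseline = {"What is our refund window?": "Refunds are accepted within 30 days."}
today = {"What is our refund window?": "We generally do not offer refunds."}
print(drift_report(baseline, today))  # flags the drifted prompt
```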
Besides that, we are helping organisations adopt the retrieval-augmented generation method to optimise the output of a large language model with targeted information without modifying the underlying model itself. We are doing so by offering plug-ins to add third-party data sources like MSCI (which provides research-driven insights, tools and indices for investors) so that organisations can ensure the output from our generative AI model is based on accurate information from trusted data sources.
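For readers unfamiliar with the method, here is a minimal RAG sketch. The toy keyword retriever and corpus are placeholders (a real deployment would use vector search over a trusted source such as MSCI data), and note that grounding happens entirely in the prompt, leaving the underlying model untouched.

```python
# Minimal retrieval-augmented generation (RAG) sketch; illustrative only.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever; production systems use vector search."""
    words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # The underlying model is not modified; grounding is purely in the prompt.
    return (f"Using only the context below, answer the question.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "MSCI index methodology weights constituents by free-float market cap.",
    "Our travel policy allows economy class for flights under six hours.",
]
print(rag_prompt("How does MSCI weight index constituents?", corpus))
```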
What’s next for AI that excites you?
One area is domain-specific AI models. Since very large models are extremely expensive to run and hard to keep on task, I am doing a lot of work with organisations that are adopting smaller, domain-specific models, such as those for stock trading or pathology.
Med-PaLM and Sec-PaLM are examples of domain-specific models. Med-PaLM harnesses the power of Google’s large language models, which we have aligned to the medical domain and evaluated using medical exams, medical research and consumer queries. It can therefore generate accurate, helpful, long-form answers to consumer health questions. As for Sec-PaLM, it is trained on security use cases. It can help analyse and explain the behaviour of potentially malicious scripts, and better detect which scripts are actually threats to people and organisations in unprecedented time.
By coupling domain-specific models with large models, organisations can get better financial performance and better accuracy. So, I expect over time that a lot of organisations will want to maintain that kind of model.
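One simple way to couple the two, sketched below under hypothetical model identifiers, is a router that sends recognisably domain-specific queries to the cheaper specialist model and everything else to the large general model.

```python
# Illustrative router coupling small domain-specific models with a large
# general model. Keywords and model ids are hypothetical placeholders.
DOMAIN_KEYWORDS = {
    "pathology": {"biopsy", "histology", "lesion"},
    "trading": {"order", "limit", "portfolio", "ticker"},
}

def route(query: str) -> str:
    """Pick the cheapest model likely to stay on task for this query."""
    words = set(query.lower().split())
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if words & keywords:
            return f"small-{domain}-model"   # placeholder model id
    return "large-general-model"             # placeholder model id

print(route("Flag anomalies in this biopsy histology report"))
# -> small-pathology-model
```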
Another area that excites me is multimodal AI, which allows multiple types or modes of data — such as video, audio, images and text — to be combined to deliver more insightful conclusions or more precise predictions about real-world problems. Today, we are just seeing the tip of the iceberg with multimodal AI.
With multimodal AI, a healthcare company, for example, can merge images of genes and cells with text-based blood disease symptoms to predict how a patient would react to a medical drug/compound. In short, domain-specific models and multimodal AI are on the horizon for AI.
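A minimal late-fusion sketch of that idea appears below, with random stand-ins for the real image and text encoders. It is illustrative only; every name and number in it is an assumption, not an actual healthcare model.

```python
# Illustrative late-fusion multimodal sketch: image and text embeddings
# are concatenated and scored by a single logistic head.
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image: np.ndarray) -> np.ndarray:
    return rng.standard_normal(128)   # random stand-in for a vision model

def text_encoder(symptoms: str) -> np.ndarray:
    return rng.standard_normal(128)   # random stand-in for a language model

def predict_drug_response(image: np.ndarray, symptoms: str,
                          weights: np.ndarray) -> float:
    """Fuse both modalities, then score the likelihood of a drug response."""
    fused = np.concatenate([image_encoder(image), text_encoder(symptoms)])
    return float(1 / (1 + np.exp(-weights @ fused)))   # logistic head

weights = rng.standard_normal(256)
cell_image = np.zeros((64, 64))
print(predict_drug_response(cell_image, "fatigue, anaemia, easy bruising",
                            weights))
```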