A survey released last month by research and advisory firm Gartner revealed that 55% of organisations are in piloting or production mode with generative AI.
Nearly half (45%) are scaling generative AI investments across multiple business functions. Software development, marketing and customer service are among the functions with the highest rate of adoption or investment in generative AI.
“Fifty-five per cent of organisations reported increasing investment in generative AI since it surged into the public domain 10 months ago,” says Frances Karamouzis, distinguished VP analyst at Gartner. Growth initiatives were cited as the primary business focus of generative AI investments (30%), followed by cost optimisation (26%) and customer experience/retention (24%).
“Generative AI is now on CEOs’ and boards’ agendas as they seek to take advantage of the transformative potential of this technology. Business and IT leaders understand that the ‘wait and see’ approach is riskier than investing,” adds Karamouzis.
The need to embrace AI TRiSM
As organisations increasingly incorporate generative AI and other forms of artificial intelligence (AI) into their operations, they will need to ensure they are not introducing new or greater risks. AI poses considerable data risks, as sensitive datasets are often used to train AI models. The accuracy of model outputs and the quality of those datasets can also drift over time, which can have adverse consequences.
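To make the drift point concrete, here is a minimal sketch in Python (assuming scipy is available; all names are illustrative, not part of any specific product) that flags when live input data no longer resembles the data a model was trained on, one common trigger for degrading output accuracy:

```python
# Minimal drift check: compare a live feature sample against the
# training-time distribution with a two-sample Kolmogorov-Smirnov test.
# All names are illustrative assumptions made for this sketch.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_sample: np.ndarray, live_sample: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Return True if live data likely no longer matches the training data."""
    statistic, p_value = ks_2samp(train_sample, live_sample)
    return p_value < alpha  # small p-value: the distributions differ

# Simulate a feature whose mean shifts after deployment.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # post-deployment shift
print(has_drifted(train, live))  # True: review or retraining is warranted
```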
Gartner believes that having an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM) will enable organisations to understand what their AI models are doing, how well they align with the original intentions and what can be expected in terms of performance and business value.
“AI requires new forms of trust, risk and security management that conventional controls don’t provide. Chief information security officers (CISOs) need to champion AI TRiSM to improve AI results by, for example, increasing the speed of AI model-to-production, enabling better governance or rationalising the AI model portfolio. This can eliminate up to 80% of faulty and illegitimate information,” advises Mark Horvath, VP analyst at Gartner.
However, AI TRiSM cannot be led by a single business unit. “CISOs must have a clear understanding of their AI responsibilities within broader dedicated AI teams, which can include staff from the legal, compliance, IT and data analytics teams,” says Jeremy D’Hoinne, VP analyst at Gartner.
Since AI may be seen as just another application, CISOs might need to recalibrate expectations within and outside of their teams. Once that is done, CISOs and their teams need to take the following five AI risk management actions:
- Capture the extent of exposure by inventorying AI used in the organisation and ensuring the right level of explainability (see the sketch after this list).
- Drive staff awareness across the organisation by leading a formal AI risk education campaign.
- Support model reliability, trustworthiness and security by incorporating risk management into model operations.
- Eliminate exposures of internal and shared AI data by adopting data protection and privacy programmes.
- Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience.
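The first of these actions lends itself to a concrete starting point. As a minimal sketch (Python 3.9+, with field names that are assumptions made for illustration rather than a Gartner-prescribed schema), an AI inventory can begin as a structured record per model, with the highest-exposure entries surfaced first:

```python
# A minimal, illustrative AI inventory record. The fields are
# assumptions made for this sketch, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    name: str                # e.g. "support-ticket-classifier"
    owner: str               # accountable business unit
    data_sources: list[str]  # datasets used for training or inference
    contains_pii: bool       # drives data protection obligations
    explainability: str      # e.g. "inherent", "post-hoc", "none"
    third_party: bool        # externally hosted or vendor-supplied model?

inventory = [
    AIModelRecord(
        name="support-ticket-classifier",
        owner="Customer Service",
        data_sources=["crm_tickets_2023"],
        contains_pii=True,
        explainability="none",
        third_party=False,
    ),
]

# Surface the highest-exposure entries first: PII with no explainability.
for record in inventory:
    if record.contains_pii and record.explainability == "none":
        print(f"Review needed: {record.name} (owner: {record.owner})")
```

Even a lightweight register like this gives CISOs a basis for the explainability and exposure conversations that the other four actions depend on.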