Artificial Intelligence (AI) is not new, and its use has always carried some element of reputational risk. Corporations have been using AI for some time, but largely behind the scenes: in data analytics, in predicting customer behaviour, in sales and marketing, or in operations. Most of the time, clients and customers would never see AI's touch in a corporation's work.
For instance, a manufacturing company might use machine learning to collect and analyse an immense amount of data, then identify patterns and anomalies that the company can use to make decisions about improving operations. AI can also help diagnose faulty machinery: factory workers send an image of a machine to an AI programme, which detects issues or defects in its installation. As a customer of this manufacturing company, you would probably never see any of this in the works.
That is set to change with ChatGPT, a generative AI language model that answers questions and assists with tasks. Its uses are already varied: students might use it to write essays, software engineers to write code, travellers to plan itineraries, and some people already use it as a search engine. These 'new' uses have prompted many companies to jump on the bandwagon.
Forbes reported that Meta, Canva, and Shopify are using ChatGPT to answer customer questions. Closer to home, Singaporean civil servants may soon harness the power of ChatGPT to conduct research and draft reports and speeches. Under the Pair programme, up to 90,000 civil servants will be able to tap ChatGPT's generative capabilities from within their go-to writing platform, Microsoft Word.
CNBC reported that Microsoft is planning to release technology that lets big companies launch their own chatbots built on OpenAI's ChatGPT technology. That could put ChatGPT in front of billions of people.
It seems like a perfect partnership, a natural next step for the technology.
It cuts both ways
Not everyone has jumped onto this tempting bandwagon. Some of the most AI-proficient organisations in the world are treading cautiously, and for good reason.
As impressive as large language models like ChatGPT have proved so far, they are still rife with well-known problems. They have a tendency to amplify social biases, often against women and people of colour. They are riddled with loopholes — users found they could circumvent ChatGPT's safety guidelines, which are supposed to stop it from providing dangerous information, simply by asking it to imagine it is a bad AI.
In other words, ChatGPT-like AI is fraught with reputational risk.
Harnessing technology with a healthy reputational risk mindset
That doesn't mean we have to dismiss AI like ChatGPT outright. Adopting any new technology is bound to come with risks. So how do we reap the benefits of AI while keeping reputational risk at a healthy level?
The Reputation, Crisis and Resilience (RCR) team at Deloitte held a roundtable with leaders in the financial services, technology, and healthcare industries to discuss how they approach the complex challenge of managing reputation risk. Key takeaways included:
- Foster a reputation-intelligent culture: One of the key things discussed was creating a culture that is sensitive to brand and reputation. In every decision, employees should have an internal compass that constantly asks: will this move the needle on the company's reputation, and how? This can be cultivated through holistic onboarding and training programmes.
- Set a reputation risk tolerance: Setting a tolerance can help organisations make intentional decisions. No company wants to take a reputational hit, but few companies actually set tolerance levels for how much risk they want to take. When you have a threshold to stay within, it’s easier to deal with new technologies you might not understand fully.
- Utilise reputation risk measurement: Measurement methods include regular surveys, media monitoring, and key opinion research. However, leaders must strike a balance, collecting relevant data without drowning in it. Research shows that excessive data collection can be counterproductive, distracting people from the bigger picture or fostering a risk-averse attitude.
Specifically in the realm of AI, Singapore has taken steps to build a governance testing framework and software toolkit, AI Verify. AI Verify helps enterprises objectively demonstrate responsible AI through standardised testing, generating reports that evaluate an AI system against various governance principles. This not only helps organisations identify potential safety and reputational risks embedded in their systems, but also enables them to be more transparent about their AI usage by sharing the reports with stakeholders.
As AI continues to develop at pace, staying on top of its every intricacy will be difficult. While we should keep abreast of developments, what matters more is cultivating a strong mindset around reputational risk, so that whatever the tool — AI, social media, cryptocurrency — we can manage the risk involved. Rather than concentrating all our effort on the dangers of a kitchen knife and how it might hurt us, we should learn the general principles of kitchen safety, whether the hazard is a sharp blade or a pan fire.
Similarly, instead of fixating on the latest technological marvel and learning every single reputational risk that might come with it, build a robust reputational risk mindset: one that will see your organisation through any risky venture, and into whose framework any new technology can easily fit.
Conall McDevitt is the managing partner of Europe and Asia for the Penta Group