Evading reputational risk when using AI

Conall McDevitt • 5 min read
Instead of learning about every AI-related risk, organisations should focus on building a robust reputational mindset. Photo: Unsplash

Artificial Intelligence (AI) is not new, and its use brings some element of reputational risk. While corporations have been using AI for some time now, its role has been largely invisible, confined to data analytics, predicting customer behaviour, sales and marketing, or operations. Most of the time, clients and customers would not see AI at work in a corporation's business.

For instance, a manufacturing company might use machine learning to collect and analyse an immense amount of data, then identify patterns and anomalies which the company can use to make decisions about improving operations. AI can also be leveraged to diagnose faulty machinery: factory workers send an image of a machine to an AI programme, which then detects issues or defects in installation. As a customer of this manufacturing company, you would probably never see any of this at work.

That is set to change with ChatGPT, a generative AI language model that answers questions and assists with tasks. Its uses are now varied: students might use it to write an essay, a software engineer to code, a traveller to plan an itinerary, and some are already using it as a search engine. These 'new' uses have prompted many companies to jump on the bandwagon.

© 2026 The Edge Publishing Pte Ltd. All rights reserved.