
A call for explainable AI

Sean Duca • 4 min read
The challenge lies in maintaining an equilibrium between transparency, accountability, and protecting proprietary interests. Photo: Pexels

Our society is populated by artificial entities known as the programmable race. While their presence is often apparent, comprehension sometimes takes a back seat. They serve us in customer service roles, engage with us in video games, and inundate our personalised social media feeds. Today, they have even infiltrated our financial lives, using artificial intelligence (AI) tools like ChatGPT to trade stocks and make investment decisions.

However, the complexity and opacity surrounding these AI tools mean that their output is only as reliable as the variables that govern them. In this vast and intricate landscape, the transparency and quality of the data and algorithms guiding these technologies are of the utmost importance. Inadequate attention to critical factors such as trust and quality can result in inherent bias, misinformation, and susceptibility to manipulation by malicious actors. We must therefore improve our ability to understand the inner workings of these tools and the reasons behind their actions.

Singapore will pilot a programme for regulatory oversight of AI to address concerns that it discriminates against specific populations. Transparency and accountability emerge as central tenets as we probe the data fuelling the algorithmic revolution. Critics misinterpret this as a veiled call to reveal intellectual property, but a nuanced examination reveals a more complex narrative.

© 2026 The Edge Publishing Pte Ltd. All rights reserved.