Singapore will pilot a programme for regulatory oversight of AI to address concerns that it discriminates against specific populations. Transparency and accountability emerge as central tenets as we probe the data fuelling the algorithmic revolution. Critics misinterpret this as a veiled call to reveal intellectual property. But a nuanced examination reveals a more complex narrative.
Our society is populated by artificial entities known as the programmable race. While their presence is often apparent, comprehension sometimes takes a back seat. They serve us in customer service roles, engage with us in video games, and inundate our personalised social media feeds. Today, they have even infiltrated our financial lives, as artificial intelligence (AI) tools like ChatGPT are used to trade stocks and make investment decisions.
However, the complexity and opacity surrounding these AI tools mean that their output is only as reliable as the data and algorithms that govern them. In this vast and intricate landscape, the transparency and quality of the data and algorithms guiding these technologies are of the utmost importance. Inadequate attention to critical factors such as trust and quality can result in inherent bias, misinformation, and susceptibility to manipulation by malicious actors. Therefore, we must improve our ability to understand the inner workings of these tools and the reasons behind their actions.

