The key to effective AI regulation: Collaboration

Brice Chambraud • 6 min read
Tackling the topic of artificial intelligence regulation often ends with more questions than answers. So how can we go about it? Photo: Pexels

Regulating technology – especially internet-based technology – has been a slow, gruelling, and complex uphill battle. Though strides have been made in protecting children and personal data, enforcing copyright, ensuring neutrality, and adjusting existing laws to cover crimes that occur in cyberspace, regulating the ever-evolving way people use technology is a major challenge – especially when bad actors are involved.

It may seem cliché to compare the internet to the Wild West, but it is apt. Advances are made at breakneck speed and information travels even faster. This unmatched ability to create and innovate at scale can work both to the benefit and the detriment of society – and because of the benefits the internet and technology offer, regulation is a tricky topic. How do governments, companies, and organisations regulate technologies that can do so much good and spread critical information without hindering them?

With the rise of generative AI – an artificial intelligence technology that is able to produce text-based content, images, videos, audio, and synthetic data at scale – harmful disinformation and misinformation campaigns with the power to influence global public opinion like nothing we have seen before are being unleashed across social media platforms. Now is the time to take regulation seriously. And it is heartening to see that many governments are doing just that.

However, the task is a difficult one that cannot be tackled hastily, despite the need for it to happen as quickly as possible.

The complexity of AI regulation

While discussions about AI regulation are happening worldwide, regulators in Southeast Asia are hoping to move as quickly as possible to create a framework, as well as tools, that will help the region use AI responsibly moving forward.

In February, ministers from the Association of Southeast Asian Nations (ASEAN) prioritised the development of a regional “AI guide”, which they hope to have drafted by the end of 2023. And in Singapore, the Infocomm Media Development Authority (IMDA) recently established the AI Verify Foundation, which aims to harness contributions from the global open-source community to develop a testing tool for the responsible use of AI, in the hopes of boosting AI-testing capabilities to meet the needs of companies and regulators globally.

However, whether such a tool will be effective in fighting ill-intentioned uses of generative AI – such as the undermining of elections through widespread, AI-powered misinformation and disinformation campaigns, a concern raised by OpenAI CEO Sam Altman – remains to be seen.

It is at least a step forward – and in the right direction.

Though regulating technology is always a challenge, tackling the topic of artificial intelligence regulation often ends with more questions than answers. Among them are three major questions: attribution, cross-border considerations, and development.

The attribution question

The mayor of a town in Australia is considering a defamation suit against OpenAI over its genAI-powered tool, ChatGPT, after the bot made false claims about his involvement in exposing a bribery scandal, painting him as the perpetrator rather than the whistleblower. Whatever the best intentions of AI operators, more such lawsuits are likely to follow.


Perhaps the biggest question when discussing regulations around generative AI is: who is responsible when AI is used maliciously? Different people will have different immediate answers, ranging from the creators and operators to the training data and the system as a whole. It is difficult to assign blame, which in turn makes it a massive challenge even to determine who should be held accountable, let alone how to regulate the technology.

The cross-border question

Aside from attribution, regulation becomes even trickier when considering the cross-border nature of technology. Content created or manipulated by someone using AI tools in one nation can have a harmful impact on another – especially when exploiting high-profile political matters.

In 2020, during clashes between Indian and Chinese soldiers at the two countries’ border, swathes of misleading videos and images went viral. These ranged from an out-of-context video of soldiers crying, to an older video of Indian soldiers dancing to Punjabi music that was falsely said to have been played over Chinese military speakers, to an image of a large Chinese speaker claimed to be so loud it was bursting Indian soldiers’ eardrums. These pieces of misinformation hugely impacted public opinion, amplifying the discord between the two nations.

Content that moves across borders massively complicates the ability to regulate, let alone enforce regulations.

The development question

Perhaps the biggest concern of all when it comes to regulating AI surrounds development itself. Generative AI technologies advance and evolve rapidly, putting cybersecurity and cyber risk management companies, as well as regulatory bodies, on the back foot – rather than creating progressive, forward-looking products and regulations, they can only react to new models and uses.

This is such a concern that many tech leaders have signed an open letter calling for a pause in AI development, alongside calls for transparency in AI systems, especially those used in healthcare and criminal justice.

Despite industry concerns, developers will not be hitting the pause button on all things AI so the rest of the industry and regulators can catch up. Because of this, more organisations need to empower themselves by adopting risk management technology that can monitor, analyse and mitigate the impact of AI-generated content. By tracking how far and fast narratives containing manipulated images spread, it is possible to effectively tackle this new generation of threats and successfully stay ahead of fast-evolving technology designed to shift human perception.


The key to effective regulation

The pace at which AI solutions develop means that to regulate this field effectively, regulations will likely need to be broad, flexible, and easily adaptable in order to keep up with technological innovations.

But the real key to crafting and enforcing effective regulation is very likely collaborative governance. Regional governing bodies like ASEAN face unique vulnerabilities and considerations – an enormous regional population with varying levels of digital literacy, as well as differing political systems, cultures, and languages – but crafting effective region-wide regulations that address the real threat of AI-generated malicious content can help solve cross-border challenges in enforcement.

By working together, either regionally or even globally, governments can create regulations that help to enact real consequences for those using AI maliciously, demand transparency and legitimacy, and foster trust and accountability.

Brice Chambraud is VP of Global Operations and Managing Director, APAC, at Blackbird.AI
