Growing the AI ecosystem

Jeffrey Tan • 8 min read
As the adoption of artificial intelligence grows, biases need to be tackled and regulations on the ethics and development of its application need to be put in place

SINGAPORE (Oct 7): About four years ago, while providing consulting services to Ontario’s Ministry of Transportation, Jutta Treviranus discovered a problem in the way artificial intelligence (AI) was being developed. Treviranus, director of the Inclusive Design Research Centre (IDRC) and a professor in the Faculty of Design at OCAD University in Toronto, had been asked to help draft policies and frameworks for emerging transportation technologies such as autonomous vehicles (AVs).

During the consultation, she had the opportunity to run simulations on several AVs. The aim was to test the robustness of their AI by introducing unexpected variables on the road. The outcomes of those simulations were alarming.

In none of the simulations was the AI able to anticipate unexpected variables, with tragic results. In particular, the AI did not recognise wheelchair users crossing a junction as pedestrians, but as stationary objects at the side of the road, and they were run over. In some cases, the AI failed to detect wheelchair users who, owing to limited mobility, had to move backwards instead of forwards. They, too, were run over.

As Treviranus recounts, however, the developers of the AI told her not to worry about the outcomes because, they claimed, the learning models were still immature. They also assured her that the AI would be made smarter by feeding the learning models with more data, including on pedestrians who are wheelchair-bound.

Yet, in subsequent simulations, the AI still failed to account for wheelchair users. “In fact, the [simulated AVs] ran over my friend who goes backwards on a wheelchair,” she tells The Edge Singapore in an interview. “The learning models were all accounting for wheelchair users going forward because that is expected.”

This incident illustrates the problem of AI bias. The AI is only as smart as the learning model or algorithm allows it to be. This, in turn, is dependent on the quality of data fed into the learning model. The AI learns by identifying dominant patterns and correlations in the data. This allows it to infer or make a decision more efficiently, Treviranus explains.

Unfortunately, this method of learning disregards outlier data. It also does not help that data brokers themselves — in cleaning the data — are removing outlier data to facilitate a quicker learning process for the AI. As a result, the AI is less able to deal with a diversity of potential outcomes. “What it is doing is privileging the average, and not attending to diverse, complex or unexpected scenarios,” says Treviranus. “We are creating a system [that] is reverting to the mean or average all the time.”
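
To make this concrete, here is a minimal sketch in Python (using scikit-learn, with entirely made-up numbers rather than data from the simulations described above) of how a model trained on majority-dominated data privileges the average: the handful of backward-moving wheelchair users is swamped by the dominant forward-moving pattern and ends up labelled as stationary objects.

```python
# A minimal sketch with made-up numbers: a linear classifier trained on
# majority-dominated data "privileges the average" and misses the outliers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One feature: speed along the crossing (positive = moving forward).
walkers = rng.normal(1.2, 0.2, size=(990, 1))     # typical pedestrians, class 1
wheelchair = rng.normal(-0.3, 0.1, size=(10, 1))  # backward-moving wheelchair users, class 1
objects = rng.normal(0.0, 0.05, size=(1000, 1))   # stationary roadside objects, class 0

X = np.vstack([walkers, wheelchair, objects])
y = np.array([1] * 1000 + [0] * 1000)

model = LogisticRegression().fit(X, y)

# The ten outliers are swamped by the dominant "forward-moving" pattern:
# the model labels backward-moving wheelchair users as stationary objects.
print(model.predict(wheelchair))                       # mostly 0 ("object")
print(model.score(wheelchair, [1] * len(wheelchair)))  # near-zero accuracy on this group
```

The point is not the particular model but the effect of the data balance: the ten outlying examples barely move the decision boundary, so the rare case is effectively invisible.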

This is why the AI installed on the AVs was unable to detect the presence of wheelchair-bound pedestrians, or their movement backwards. The systems were only trained to detect the average pedestrian on the street, who usually walks in a forward direction.

Increasing concern

Given that many businesses and government agencies are starting to leverage AI applications, the need to identify and address AI bias cannot be overstated. According to a recent survey of 500 IT, customer experience and digital decision-makers in organisations here, one in two respondents currently uses AI in their business. The top departments in which AI is being broadly implemented are IT (23%), operations (20%) and customer service (19%). In addition, 90% of respondents said they were concerned about the potential for AI to create unequal access for different segments of society or to create a bias against certain groups. The survey was conducted by Tech Research Asia, an IT research and consulting firm.

“As AI becomes more widely used, AI bias will be more prevalent,” says Alexander Binder, assistant professor of information systems technology and design at the Singapore University of Technology and Design. “I think it is an issue that should be increasingly considered.”

Another concern is the local preference for AI developed abroad over AI developed at home. According to the TRA survey, 42% of respondents favoured using AI developed by a global tech company, compared with 30% who favoured local AI vendors. Foreign-developed AI carries a higher risk of bias against local input, as it is likely to have been trained on data that does not reflect the local context.

An example is facial recognition, says Leong Tze Yun, director of AI technology at AI Singapore and professor of computer science at the National University of Singapore. “If you import AI that is trained on another population, it will not work well in Singapore because we are a multiracial society,” she tells The Edge Singapore. AI Singapore is a national AI programme launched by the National Research Foundation to grow the local AI talent and ecosystem.

Treviranus is amazed at Singapore’s inclination to adopt foreign solutions. “The thing that surprises me about Singapore is just how enamoured people here seem to be of ideas, technology and innovations that come from elsewhere, when Singapore has the opportunity, resources and knowledge to do something innovative and different,” she says.

Still, that does not mean that locally developed AI is free from bias. Binder says AI recruitment software could be biased against job applicants. For example, a weaker fresh graduate could be called for an interview at the expense of a stronger one, because the AI has determined that the former, who attended a prestigious university, is likely to be a better candidate than the latter, who attended a less prestigious one. “This [bias] is harder to identify because the public has no access to the results of the AI recruitment software,” he says.

AI bias can also appear in bank loan application software, according to Treviranus. For instance, a loan application filed by a person of a certain ethnicity could be rejected on the grounds of high default risk. This is because the AI has learned that the majority of loan defaults are recorded by people of that ethnicity. “So, there won’t be a decision that says: ‘Wait a second. This is an exception,’” she says.

Preventive measures

So, how can AI bias be eliminated, or at least minimised? Treviranus urges a different approach to how AI is currently developed. Instead of training the AI to identify dominant patterns and correlations in the data, which favour the majority or average, developers should start by feeding the learning models with outlying data, she says.

In other words, from a normal distribution perspective, Treviranus recommends using data from the edges — instead of the middle portion — of the bell curve to train the AI first. “So, rather than beginning with the average [person] that is complacent and has no difficulty, start with people at the edge because they are going to cover much of the terrain. If you start with the edge, you are going to satisfy the middle,” she says. This way, the AI gets a better sense of diversity, she adds.
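
As an illustration only (hypothetical data and a generic scikit-learn model, not Treviranus’s own procedure), one way to act on the “start with the edges” idea is to make sure the outlying group is present and heavily weighted during training, and to evaluate each group separately rather than relying on overall accuracy.

```python
# A hedged sketch: weight the rare edge cases up so the model cannot satisfy
# its loss by fitting only the average case, then check performance per group.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
walkers = rng.normal(1.2, 0.2, size=(990, 1))     # typical forward-moving pedestrians
wheelchair = rng.normal(-0.3, 0.1, size=(10, 1))  # backward-moving wheelchair users
objects = rng.normal(0.0, 0.05, size=(1000, 1))   # stationary roadside objects

X = np.vstack([walkers, wheelchair, objects])
y = np.array([1] * 1000 + [0] * 1000)             # 1 = pedestrian, 0 = object

# Weight the rare edge group so it counts as much as the majority,
# and use a model family flexible enough to represent it at all.
weights = np.concatenate([np.ones(990), np.full(10, 99.0), np.ones(1000)])
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y, sample_weight=weights)

# Report accuracy per group: satisfying the middle alone is not enough.
for name, grp, label in [("walkers", walkers, 1), ("wheelchair", wheelchair, 1), ("objects", objects, 0)]:
    print(name, model.score(grp, [label] * len(grp)))
```

Here the check that matters is the per-group score: an overall accuracy figure dominated by the middle of the bell curve would hide exactly the failures at the edges that Treviranus describes.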

But that would mean the AI will subsequently be trained on the majority data as well, so the learning models will be fed the entire data population. Wouldn’t this be time-consuming and expensive? Treviranus does not think so, as the cost will be relatively lower in the long run. By contrast, if the current approach continues, the cost of retraining the AI, fixing bugs and making other contextual changes will be higher, she explains.

That aside, Richard Koh, chief technology officer of Microsoft Singapore, says the AI development team should be a cross-functional one. In particular, there should be legal counsel, psychologists, sociologists and other experts, apart from programmers, involved in developing the AI, he elaborates. A diversity in cultures also helps, as different values and perspectives are taken into account. “This lends a stronger assurance of reducing bias,” he tells The Edge Singapore in an interview.

On a broader level, Leong says regulations on AI ethics and development need to be put in place. But the jury is still out on who should take responsibility for the enactment and enforcement of those regulations.

According to the TRA survey, 41% of respondents say the responsibility for regulation lies with the Singapore government, while 33% think industry bodies and associations should be responsible. A further 30% say global technology companies should bear responsibility. Interestingly, only 39% of businesses think it is “extremely important” that AI systems comply with all relevant international and Singaporean regulations.

The way Leong sees it, this needs to be a concerted effort that involves every stakeholder, regardless of national boundaries. “Governments, policymakers, academics and industry players need to look at the ethical, legal and technical aspects. It’s going to be a long process. But we have seen people starting to look into this issue,” she says.
