Public and private sectors view ethical artificial intelligence priorities differently, according to EY global study

Felicia Tan • 3 min read
The majority of policymakers surveyed agree on the ethical principles most relevant to the different applications of AI.

A global survey conducted by EY in collaboration with The Future Society found that there were significant differences in how the public and private sectors view the future of ethics, governance, privacy, policy, and regulation of artificial intelligence (AI) technologies.

The survey polled 71 policymakers and more than 280 global organisations, who ranked ethical principles by importance across 12 different AI scenarios.

According to the report, discrepancies in views on AI exist in four key areas: fairness and avoiding bias, innovation, data access, and privacy and data rights.

The majority of policymakers surveyed agree on the ethical principles most relevant to the different applications of AI. For instance, policymakers rated “fairness and avoiding bias” and “privacy and data rights” as the top two concerns in the use of AI for facial recognition.

In the private sector, responses were more varied. Top choices in the sector were principles prioritised by existing regulations, rather than issues such as fairness and non-discrimination.

Ethical AI is not a priority for Singapore’s private sector.

Only 15 companies had adopted the second edition of the country’s Model AI Governance Framework released in January 2020.

The lack of AI governance may prove to be a concern, according to Cheang Wai Keat, Singapore head of consulting at Ernst & Young Advisory.

“Significant misalignments around fairness and avoiding bias generate substantial market risks, as companies may be deploying products and services poorly aligned to emerging social values and regulatory guidance. However, companies that are able to establish trust in their AI-powered products and services will be at an advantage,” he says.

While it is generally agreed among policymakers and companies that a multi-stakeholder approach is needed for AI governance, there is disagreement on what form it will take, according to the survey.

About 38% of organisations expect the public sector to lead the governance framework, while only 6% of policymakers agree.

EY says the disconnect poses potential challenges for both groups in driving governance forward.

Benjamin Chiang, EY Asean and Singapore government and public sector leader at Ernst & Young Advisory, says bridging the disconnect “must be a national imperative”.

“Governments must make closing the AI trust gap a top priority, as this will allow the acceleration of digital transformation necessary to address the urgent public health, social and economic challenges before us,” he says.

Each group also has blind spots when it comes to the implementation of ethical AI: 69% of companies agreed that regulators understand the complexities of AI technologies and business challenges, while 66% of policymakers disagreed.

© 2024 The Edge Publishing Pte Ltd. All rights reserved.