Can AI help us build a more ethical world?

Edmond Awad and Theos Evgeniou • 7 min read
By building more ethical AI, we're also furthering our understanding of our own inner moral world. Photo: Possessed Photography/Unsplash

Understanding human nature has been a question for millennia, with implications ranging from how we develop policies and economic theories to how we organise and govern ourselves.

Despite years of scholarship on the philosophy of ethics and human moral judgement, our understanding of ethics still falls short. For one, we have an incomplete picture of how humans make moral decisions. For example, we do not yet have a clear understanding of why some moral decisions feel more intuitive to some people and less so to others.

Moreover, there is no agreement, even among experts, over what we should do in moral trade-offs. Indeed, not only is ethics elusive, but some situations may even uncover incoherence in our general moral principles and push us to reconsider our judgement.

Can the recent rise of AI help us disentangle these challenging questions and even lead us to a more ethical world, consistently so? Recently, the discussion among businesses, regulators and civil society has largely shifted from "AI systems" to "responsible AI systems". But perhaps there is yet another area where AI can play a role: understanding our own human nature while also building ethical AI systems.

This is what a number of us — about 20 researchers from global academic institutions — are proposing in a recent article on “Computational Ethics” published in the Trends in Cognitive Sciences journal.

Our argument is that we may be at a turning point where, with novel interdisciplinary work, we not only build better, more ethical AI systems but also further our understanding of millennia-old questions about our own inner moral world.

Computational Ethics builds upon an idea proposed almost 50 years ago by the late David Marr and MIT’s Tomaso Poggio, which became what is now known as Computational Neuroscience. The foundations laid out in that area decades ago not only led to some of the most powerful AI systems today, such as those for computer vision, but also to new ways of studying our brain and understanding functions such as the visual system.

The next frontier

Computational Ethics can now tackle the next frontier: understanding the parts of our mind and brain that have to do with ethics and “higher” cognitive functions, while also helping us design and develop ethical AI systems. The key to achieving this dual goal is in characterising ethics in computational or algorithmic terms — in other words, formalising it.

However, formalising ethics presents many challenges. A key requirement of formalising ethics is consistency — cases and situations that can be characterised by the same relevant features should be judged similarly. When considering a complex system (like many AI systems, or the human mind), maintaining consistency while formalising ethics can become a very challenging task.

In such complex systems, different parts are related to each other, and they function interdependently. Therefore, revising some of these parts in the hope of restoring consistency can introduce new inconsistencies in the other parts.

Additionally, the “input data” in such complex systems is high-dimensional, making it intractable to track all possible inputs and extremely difficult to uncover existing inconsistencies. This makes understanding the inner workings of AI systems elusive, and it creates major challenges for businesses and regulators.
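
As a rough illustration of what the consistency requirement above might look like once formalised, here is a minimal sketch in Python. The relevant features, cases and judgements are invented purely for this example, and real inputs would be far higher-dimensional, which is exactly why the check becomes intractable at scale.

```python
from collections import defaultdict

# Each hypothetical case: (relevant features, recorded moral judgement).
# Consistency requires that cases sharing the same relevant features
# receive the same judgement.
cases = [
    ({"harm_avoided": "high", "consent": "yes"}, "permissible"),
    ({"harm_avoided": "high", "consent": "yes"}, "permissible"),
    ({"harm_avoided": "high", "consent": "no"},  "impermissible"),
    ({"harm_avoided": "high", "consent": "no"},  "permissible"),  # conflicts with the case above
]

def find_inconsistencies(cases):
    """Group cases by their relevant features and flag any feature set
    that received more than one distinct judgement."""
    by_features = defaultdict(set)
    for features, judgement in cases:
        by_features[frozenset(features.items())].add(judgement)
    return [(dict(k), sorted(v)) for k, v in by_features.items() if len(v) > 1]

print(find_inconsistencies(cases))
# e.g. [({'harm_avoided': 'high', 'consent': 'no'}, ['impermissible', 'permissible'])]
```

Even this toy check only works because the feature space is tiny; with realistic, high-dimensional inputs, enumerating and comparing cases in this way quickly becomes infeasible.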

Another crucial, relevant point that complicates matters further is that evaluating ethical behaviour of humans and machines can be challenging. First, it is not always clear what the reference point is. Depending on the goal, we may want machines to match human behaviour or we may want them to match some ideal behaviour that humans themselves cannot achieve. Neither of these is easily defined.

Second, different ethical values assume different ways of evaluation, making them difficult to compare. How should we balance two values, such as fairness and privacy, when they are in conflict, and how should we balance different conflicting versions of the same value like the different definitions of fairness?

Indeed, there is a well-known impossibility theorem that shows we cannot ensure that AI systems are fair simultaneously across several well-accepted definitions of fairness. In other words, we cannot have it all no matter how hard we try.
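
To make this concrete, here is a minimal sketch with invented numbers (not taken from the article or the cited paper). Two groups have different base rates of the outcome, and a hypothetical classifier satisfies one fairness definition, equal selection rates (demographic parity), yet clearly violates another, equal true and false positive rates (equalised odds).

```python
# Per-group confusion counts for a hypothetical screening model (toy numbers).
# tp/fp/fn/tn = true/false positives and negatives.
groups = {
    "A": {"tp": 35, "fp": 5,  "fn": 15, "tn": 45},  # 50 of 100 truly positive
    "B": {"tp": 18, "fp": 22, "fn": 2,  "tn": 58},  # 20 of 100 truly positive
}

def selection_rate(c):
    """Share of the group that receives a positive decision."""
    total = c["tp"] + c["fp"] + c["fn"] + c["tn"]
    return (c["tp"] + c["fp"]) / total

def true_positive_rate(c):
    """Share of truly positive cases that are correctly selected."""
    return c["tp"] / (c["tp"] + c["fn"])

def false_positive_rate(c):
    """Share of truly negative cases that are wrongly selected."""
    return c["fp"] / (c["fp"] + c["tn"])

for name, c in groups.items():
    print(f"group {name}: selection={selection_rate(c):.2f} "
          f"TPR={true_positive_rate(c):.2f} FPR={false_positive_rate(c):.2f}")

# group A: selection=0.40 TPR=0.70 FPR=0.10
# group B: selection=0.40 TPR=0.90 FPR=0.28
```

Both groups are selected at the same 40% rate, so demographic parity holds, yet the error rates differ sharply across groups. When base rates differ, fixing one of these metrics generally forces the others apart, which is the intuition behind such impossibility results.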

Third, the human mind and most state-of-the-art AI techniques are opaque and complex, and it is hard, if not impossible, to understand the workings of such systems by looking inside (that is, at the brain, or at the AI code and data). Currently, a lot of effort is spent on what is called explainable AI and transparent AI, namely algorithms that can help us understand why and how an AI system reached its decisions.

Finally, groups of humans and machines working together form yet another complex system, in which it becomes even harder to understand and study the ethical behaviour of humans and machines.

Regulators emphasise the concept of accountability for AI systems, but defining it in practice will likely prove challenging given the "humans plus machines" systemic complexities. Better understanding these questions, and building better and more ethical AI systems along the way, requires new research methodologies spanning multiple fields.

Developing ethical algorithms

Formalising our theories of ethics can enable progress both in developing ethical AI systems and in understanding human ethics. It would also encourage a sustained exchange between the two, so that our understanding of human ethics informs our attempts at engineering ethical machines, while the process of developing ethical algorithms offers an opportunity to examine and understand our own ethics.

To understand this, consider the case of a healthcare professional who has recommended a particular treatment for their competent adult patient, but the patient has rejected it. Should the healthcare professional challenge the patient’s autonomy? What if the recommendation is from a US Food and Drug Administration-approved AI medical device (there are already a few hundred of them in the market)?

What if they believe that the patient is refusing the treatment irrationally due to external influences? Should insurance companies be allowed to refuse reimbursement when people go against the recommendations of regulator-approved AI medical diagnosis devices that have proven more accurate at diagnosis than the best human experts?

While healthcare professionals make intuitive, context-sensitive judgements in cases like this, their judgements are often guided by general principles or policies, such as concerns about autonomy, non-maleficence and beneficence. Nevertheless, such policies are usually general, abstract and highly underspecified, while the actual decisions made by individuals in specific concrete cases end up being influenced by complex, implicit principles, many of which remain unstated and may even conflict.

On the one hand, attempting to create an eldercare robot that would make decisions like the one described above would certainly benefit from existing scholarship on how healthcare professionals make decisions and the policies that guide their decisions. On the other hand, going through the implementation process would also require specifying and formalising principles of moral decisions in the relevant context and thus would help us examine points of conflict or incoherence in human judgement.

Where we now stand with ethics, we once stood with vision. Much like computation helped us develop computer vision and better understand human vision, the proposed agenda can help us resolve the millennia-old questions about human ethics and, at the same time, build a responsible AI and tech world that is safe and beneficial for all.

Edmond Awad is a lecturer (assistant professor) at the University of Exeter Business School, an associate research scientist at the Max Planck Institute for Human Development, and a Turing fellow at the Alan Turing Institute.

Theos Evgeniou is a professor at Insead, a co-founder of Tremau, a member of the OECD Network of Experts on AI, a World Economic Forum academic partner on AI, and an adviser at the BCG Henderson Institute.

"Computational Ethics", published in the Trends in Cognitive Sciences journal, is co-authored by 20 authors across 5+ disciplines and top universities. https://www.sciencedirect.com/science/article/pii/S1364661322000456
