SINGAPORE (Sept 2): When HSBC Holdings thwarted a US$500 million ($694 million) central-bank heist, sophisticated computer software did not raise the alarm. The funds flowed undetected from Angola’s reserves to a dormant company’s account in London. It was a teller at a suburban bank branch who became suspicious, declined a request to transfer US$2 million, and triggered a review that uncovered the scam, according to one account of the episode.
That was two years ago, and the finance industry’s battle to stop the illicit transfer of as much as US$2 trillion a year around the globe has not become any easier. At least half a dozen lenders in Europe have found themselves at the centre of fresh allegations of dirty money schemes in the past year. The wave of scandals — at Denmark’s Danske Bank, Deutsche Bank and others — is undermining confidence in the industry well beyond the individual institutions involved.
Financial services executives have had little choice but to significantly step up regulatory efforts; more than one in 10 now spend in excess of 10% of their annual budgets on compliance, according to financial adviser Duff & Phelps. Banks are eager to find ways to bring that spending down — management, employees and shareholders never want to spend on what are effectively internal cops. Today, there is a sense that growth may be peaking. About two-thirds of institutions considered systemically important on a global level, a leading indicator for the industry, expect the size of their compliance teams to either remain unchanged or shrink, according to a Thomson Reuters Regulatory Intelligence report. The largest companies want to adapt their teams to grow or scale back as necessary, the report says.
That has led to buzz that banks are deploying artificial intelligence to replace surveillance staff. HSBC last year started using AI to screen transactions, and the two biggest Nordic banks have said they are replacing compliance staff with algorithms. Online banking start-ups such as Revolut, which rely on computerised efficiency to compete with established lenders, are finding compliance a challenge they need to address.
So far, machines are confined to simple know-your-customer (KYC) applications and are far from ready to replace humans, says Tom Kirchmaier, a visiting fellow at the London School of Economics’ (LSE) Centre for Economic Performance. He is not optimistic that a major advance is afoot, either. “There’s a lot of talk but no action,” he says.
Take, for example, ING Groep, which last year paid €775 million ($1.2 billion) to settle an investigation by a Dutch prosecutor into alleged money laundering and other corrupt practices. Even though the bank uses machine learning to filter out false alerts on potential bad actors, the lender has had to ramp up the number of individuals handling KYC procedures.
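The "filter out false alerts" step ING uses can be pictured, in very rough terms, as a triage layer sitting between rule-based alerts and human analysts. The sketch below is purely illustrative and uses invented field names and a hand-written heuristic standing in for a trained model; it is not a description of any bank's actual system.

```python
# Illustrative sketch only (hypothetical fields and scores): triaging
# rule-based alerts with a learned score so analysts review fewer
# false positives.

def triage(alerts, score_fn, review_threshold=0.5):
    """Route alerts: those scored above the threshold go to a human analyst."""
    return [a for a in alerts if score_fn(a) >= review_threshold]

def score(alert):
    # Stand-in for a trained model: a toy heuristic on one invented feature.
    return 0.9 if alert["country_risk"] == "high" else 0.2

alerts = [
    {"id": 1, "country_risk": "high"},
    {"id": 2, "country_risk": "low"},
]

for_review = triage(alerts, score)
print([a["id"] for a in for_review])  # → [1]: only the high-risk alert escalates
```

The point of such a layer is volume reduction, not final judgment: low-scoring alerts are suppressed or deprioritised, which is exactly why banks like ING still need more, not fewer, people on KYC work downstream.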
It has tripled compliance personnel in the Netherlands over eight years; staff dedicated to KYC account for 5% of the total number of employees.
Banks and tech companies need to overcome several obstacles for AI to succeed in tackling money laundering. For starters, they need better customer data, which is often neither current nor consistent, especially when a bank spans multiple jurisdictions. Enhancing the quality and frequency of data gathering is a crucial first step.
Banks are also constrained in their ability to detect bad behaviour, with or without computers, because competitors and national law enforcement agencies will not share data. Across Europe, for example, regulation and enforcement are split along national borders. Lenders would benefit from a common European anti-money-laundering regulator, data sharing among banks and a more open dialogue with bank supervisors, Citigroup analysts wrote in a note to clients in June.
When banks do share information, it is often unhelpful. They tend to over-report suspicious activity to the relevant agencies to shed responsibility, but enforcement authorities typically do not provide their findings to the financial companies. In addition, banks, wanting to shield bigger clients from unnecessary scrutiny, often under-report activity they should be flagging, according to the LSE’s Kirchmaier. That leads to potentially suspicious transactions being classified as normal. The algorithms learn to replicate those types of decisions.
In short, the historical dataset available to train the machines is misleading, making it harder for algorithms to learn to tell genuine laundering apart from normal activity.
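The mechanism Kirchmaier describes, models inheriting the bank's own reporting bias, can be made concrete with a toy example. The data and the tiny majority-vote "model" below are entirely hypothetical; they exist only to show that if suspicious big-client activity was historically left unflagged, a model trained on those labels learns to wave it through.

```python
# Toy illustration (invented data): a naive frequency-based model trained on
# bank-filed labels, where big-client suspicious activity was under-reported.
from collections import Counter

# Each record: (client_tier, truly_suspicious, label_filed_by_bank)
history = [
    ("small", True,  True),   # flagged correctly
    ("small", True,  True),
    ("small", False, False),
    ("big",   True,  False),  # under-reported: suspicious, but never flagged
    ("big",   True,  False),
    ("big",   False, False),
]

def train(records):
    """Learn, per client tier, the majority label the bank actually filed."""
    votes = {}
    for tier, _truth, label in records:
        votes.setdefault(tier, Counter())[label] += 1
    return {tier: c.most_common(1)[0][0] for tier, c in votes.items()}

model = train(history)
print(model)  # → {'small': True, 'big': False}: the bias is now in the model
```

Note that the ground truth (`truly_suspicious`) never reaches the training step; only the filed labels do. That is the crux of the problem: the machine faithfully learns the reporting behaviour, mistakes included.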
Criminals, by contrast, are constantly adapting their ways, finding new routes for their cash when existing ones are blocked off. Catching tomorrow’s money launderers requires anticipating where they will move next. Will they trade gold or crypto assets? When parameters change even slightly, AI struggles to stay ahead of the criminals.
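This adaptation problem is what machine-learning practitioners call concept drift: a model fitted to yesterday's patterns degrades when the underlying behaviour shifts. A minimal sketch, with made-up thresholds and channels, shows how a static rule goes stale the moment launderers change route:

```python
# Toy sketch of concept drift (hypothetical rule and data): a detector fitted
# to historical wire-transfer cases misses the same scheme in a new channel.

THRESHOLD = 10_000  # cutoff tuned on past cases, all of which used wires

def flags(txn):
    # Static rule: training data only ever contained large wire transfers.
    return txn["channel"] == "wire" and txn["amount"] >= THRESHOLD

old_style = {"channel": "wire", "amount": 50_000}
new_style = {"channel": "crypto", "amount": 50_000}  # launderers switch routes

print(flags(old_style))  # → True:  yesterday's pattern is still caught
print(flags(new_style))  # → False: the adapted scheme slips through
```

Retraining on fresh data narrows the gap but never closes it, since by definition the newest schemes are the ones with no labelled history yet.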
Trust in financial services, shaken by the 2008 crisis, is taking a very long time to rebuild, and banks are wary of teaching machines to stereotype customers based on where they come from or where they do business. "Ethical concerns associated with AI are rightfully restraining banks' full embrace of machine learning," says Alexon Bell, chief product officer at Quantexa, a London-based data analytics company that counts HSBC among its customers.
Regulators, frustrated with the slow pace of change, have encouraged banks to deploy more technology. In December, the US Treasury Department's Financial Crimes Enforcement Network, jointly with the Federal Reserve and other US agencies, called on banks to try new approaches to meet anti-money-laundering requirements, including AI, and offered leniency if the tools uncover deficiencies in existing systems.
One thing seems clear: Compliance spending at banks may be shifting away from employing humans to adopting new software. For now, those living and breathing internal cops are here to stay.