
For years, the financial services industry has sought to automate its processes, ranging from back-end compliance work to customer service. But the explosion of generative artificial intelligence has opened up both new possibilities and potential challenges for financial services firms.

“The entire world is in a discovery phase right now,” says Saby Roy, a technology consulting partner at EY. “We’re seeing many organisations trying to put it to real use.”

Applications of AI in financial services

AI is already being used to try to improve the customer experience when dealing with financial services groups. Many consumers are familiar with basic iterations of “chatbots” on the websites of banks and retailers, but these tend to have limited functionality and rely on a series of predefined answers.

Financial institutions now hope that generative AI could replace these systems with alternatives that are more capable of responding to complex requests, learning how to deal with specific customer needs, and improving over time.

“If you look at the customer service side, we’re seeing a lot of interest from clients into how they can put generative AI into action around chat channels,” says Rav Hayer, a managing director at management consultancy Alvarez & Marsal’s digital practice. “There’s a lot of discussions around conversational finance.”

Another area in which automation has already taken hold is lending. Here, AI systems are being used to look over documentation and speed up the assessment of whether a consumer can afford credit products, such as mortgages. 

“We have 15 different AI models live on our platform, performing different functions,” explains Stuart Cheetham, chief executive of mortgage lender MPowered Mortgages. Different models check which bank a statement is from, examine its veracity, and transform it into machine-readable data that can be used to help make a decision.
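MPowered's models are proprietary and the article gives no implementation detail, so the following is purely an illustrative sketch of the pattern Cheetham describes: separate stages for identification, veracity checking and data extraction, chained together with a human still reviewing the output. Every function, rule and label here is invented.

```python
import re

# Illustrative sketch only: a multi-stage document pipeline in the spirit
# of the one described above. All names and rules are hypothetical and
# bear no relation to MPowered's actual models.

def identify_bank(statement_text: str) -> str:
    """Stage 1: classify which bank issued the statement (toy keyword rules)."""
    for bank in ("Barclays", "HSBC", "Lloyds"):
        if bank.lower() in statement_text.lower():
            return bank
    return "unknown"

def check_veracity(statement_text: str) -> bool:
    """Stage 2: toy consistency check standing in for a veracity model."""
    return "balance" in statement_text.lower()

def extract_fields(statement_text: str) -> dict:
    """Stage 3: turn free text into machine-readable data (toy extraction)."""
    amounts = [float(m) for m in re.findall(r"£([\d.]+)", statement_text)]
    return {"amounts": amounts, "total": sum(amounts)}

def run_pipeline(statement_text: str) -> dict:
    """Chain the stages; a human still reviews the result before any decision."""
    return {
        "bank": identify_bank(statement_text),
        "looks_genuine": check_veracity(statement_text),
        "data": extract_fields(statement_text),
    }

result = run_pipeline("HSBC statement. Balance: £1,200 opening, salary £2500.00")
```

The point of the structure, rather than the toy rules, is that each stage is separately testable and explainable, which is consistent with Cheetham's insistence below on avoiding "black box" systems in the decisioning process.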

However, the system is not fully automated, Cheetham says, with humans still involved in making the final decision. Under the General Data Protection Regulation, consumers have some protections from fully automated decision making, in which no humans are involved.

“We don’t allow any black box AI to be used near a decisioning process,” he says, referring to systems whose processes cannot be clearly explained. 

At the other end of the scale, AI is also finding applications in investing — helping fund managers to turn raw data into something that can be used to make smart choices about shares or other asset classes.

“It gives you a much more forward-looking view,” says Hal Reynolds, co-chief investment officer at Los Angeles Capital. “It allows you to understand information a lot more efficiently so you can be prepared to make a good investment decision.”

Among the data sets that the firm’s systems study are executives’ calls with analysts, which they can scan for clarity of purpose, analyst responses, and whether companies’ results live up to what their bosses are saying.

Firms are also adapting generative AI to help fight financial crime, with a broad range of use cases — including the slow and expensive, but vital, field of anti-money laundering and ‘know your customer’ protocols.

Gains from the use of AI  

“It’s all about saving minutes which leads to hours,” says Guðmundur Kristjánsson, founder and chief executive of Icelandic fintech Lucinity, which uses AI to support bank staff trying to detect money laundering and other illicit behaviour.

Lucinity’s “co-pilot” system, Luci, turns alerts about transactions and individuals into text, allowing agents to assess them more quickly, and can write a summary of the case, speeding up agents’ ability to work through their caseload and deal with more potential issues.

“I’ve been in AI really for 15 years — the rate of innovation has become so fast,” Kristjánsson observes. “The tools are getting more accessible, so that a little company like ours can reap the benefits.”

Larger players, including Mastercard, are also using AI to fight fraud — a problem that cost the UK £1.2bn in 2022, according to industry trade body UK Finance.

Earlier in July, the payment processing group unveiled its new Consumer Fraud Risk system, which offers banks, within milliseconds, an individual score for how likely a transaction on the UK’s Faster Payments network is to be fraudulent. The system builds on earlier work on “money mule” accounts used for money laundering.
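Mastercard has not published how the score is computed. Purely as a hypothetical illustration of per-transaction risk scoring, the toy logistic model below maps a few boolean features of a payment to a score between 0 and 1; the feature names, weights and threshold are all invented assumptions, not Mastercard's.

```python
import math

# Invented feature weights for a toy logistic risk scorer. These values
# are illustrative assumptions only, not Mastercard's actual model.
WEIGHTS = {"new_payee": 2.0, "amount_over_1000": 1.5, "odd_hour": 0.8}
BIAS = -3.0

def fraud_risk_score(tx: dict) -> float:
    """Return a 0-1 risk score from boolean transaction features."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if tx.get(name))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

# A routine payment scores low; an unusual one scores high.
low = fraud_risk_score({"new_payee": False})
high = fraud_risk_score({"new_payee": True, "amount_over_1000": True, "odd_hour": True})
```

In a deployment of this kind, the bank would compare the score against a threshold and hold or block the payment before the money leaves the account — the “forward fraud” prevention Bhalla describes below.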

High street bank TSB, which has been trialling the system since January, estimated that it could reduce cases of authorised push payment fraud — in which users are tricked into sending money to criminals — by about 20 per cent.

“The scale and delivery are what’s different with AI,” says Ajay Bhalla, Mastercard’s president of cyber and intelligence. “Banks can stop the forward fraud before the money is transferred.”

Risks from the use of AI

But experts are also concerned about the risks of AI, including its ability to enable financial crime. Alvarez & Marsal’s Hayer highlights concerns that fraudsters will use generative AI to make their attempts to steal data and money more effective — for example, by better impersonating a senior colleague in an email.

Earlier deployments of automated tools have also faced controversy over the impact of their failures, such as wrongful arrests in the US because of the limitations of facial recognition technology. For Hayer, that means it is crucial that institutions look at the risks as much as the opportunities.

“Governance is going to be absolutely key,” he says. “How do you reap and spread the benefits of AI without unleashing a host of unintended consequences or creating something that’s ultimately destructive?”

Copyright The Financial Times Limited 2024. All rights reserved.