Marvin the Paranoid Android in a scene from The Hitchhiker's Guide to the Galaxy (2005) © AF Archive/Alamy

Financial professionals and their attendant courts of consultants and journalists have become overawed by the prospects for artificial intelligence and machine learning. Faced with their own insignificance, they are throwing money and attention at these technologies.

They are not thinking carefully about how AI in general, and machine learning in particular, are useful rather than totemic. Most people in the financial world grasp accounting and statistics. Unfortunately, they are too invested in their dignity to ask stupid questions about arcane engineering, let alone science.

That is too bad, because fields such as AI benefit from critical thinking.

Machine vision and natural language processing have improved enormously thanks to advances in machine learning, much of which can be credited to the growing analytic power of artificial neural networks.

To be specific, vision and translation forged ahead thanks to “deep learning”, which became effective because a method called backpropagation gave these systems a way to learn from their own mistakes. The machines run millions of trial-and-error tests, with backpropagation feeding the error in each guess back through the network, to learn how to interpret an image or a foreign-language sentence.
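For readers who want to see what that trial-and-error loop actually looks like, here is a minimal sketch in Python of a tiny network learning by backpropagation. Everything in it — the toy XOR data, the network size, the learning rate — is an illustrative assumption, not anything described above.

```python
# A minimal sketch of backpropagation: a two-layer network learns XOR
# by repeated trial and error. All sizes and rates here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current guess.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error back through each layer.
    err_out = (out - y) * out * (1 - out)    # gradient at the output
    err_h = (err_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Nudge the weights to reduce the error -- the "trial and error" step.
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_h

print(out.round(2))  # after training, should be close to [[0], [1], [1], [0]]
```

The point is the loop: guess, measure the error, push it backwards through the layers, adjust, repeat — millions of times over in a real system.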

As one AI data scientist says, “AI has been machine learning, which has been ‘backprop’. That is what has blown up this bubble.” The casual observer might also note that both image interpretation and machine translation have been key goals of technical intelligence agencies, which have spent billions on their trial-and-error iterations. The leading countries in AI include the seemingly pacific Canadians, the warlike Americans and Israelis, and the control-freak Chinese. All have technical intelligence expertise.

So the US uses machine vision to find missile launchers and China uses it to identify the movements of politically suspect people. Industrial companies use machine vision to inspect parts and guide assembly robots.

Social media groups use natural language processing to determine users’ consumption patterns or vulnerability to political messages. Voice-driven customer interfaces reduce the need for human employees’ time.

Useful applications, but a long way from a universal key to social value and profit. For example, a language translation app with 99 per cent accuracy would be transformative, but a self-driving car whose machine vision sees pedestrians with 99 per cent accuracy would be (and has been) a tool for vehicular homicide.
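The arithmetic behind that asymmetry is worth spelling out. A hedged illustration, with made-up numbers: at 99 per cent per-encounter reliability, failures stop being rare as soon as the encounters pile up.

```python
# Illustrative arithmetic: why "99 per cent accurate" means different things.
# One mistranslated sentence in 100 is an annoyance; one missed pedestrian
# in 100 encounters, repeated over thousands of encounters, is a tragedy.
p_detect = 0.99
encounters = 10_000                      # hypothetical pedestrian encounters
expected_misses = (1 - p_detect) * encounters
p_no_miss = p_detect ** encounters       # chance of a flawless run
print(expected_misses)                   # 100 expected failures
print(f"{p_no_miss:.2e}")                # ~2.2e-44: effectively zero
```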

What if an AI fintech mortgage app used historical data to disproportionately deny finance to ethnic minorities? That would be a compliance and litigation nightmare.

Many portfolio investors and commentators do not understand how hard it is to track how deep-learning algorithms make decisions. What makes them “deep” are the many layers of artificial neurons that weigh the utility of decision-making paths.

That may be a partial description of the by-guess-and-by-God way that humans think, but people can at least come up with after-the-fact explanations and reasoning. Deep-learning programs do not, inherently, do that.

If you are a compliance officer trying to demonstrate that an investment decision was not made using inside information, or a project manager trying to tweak the program for the better, this opacity is frustrating. It is what AI people call the shortcoming in the “interpretability” of machine learning. Getting beyond it will probably require more hard-won scientific insights and a better theoretical framework.
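As a hedged illustration of the problem, consider this Python sketch: a trained model’s internal “reasoning” is nothing but weight matrices, and a common post-hoc workaround such as permutation importance only says which inputs mattered, not why a given decision was made. The dataset and model here are invented for the example.

```python
# Sketch: even after training, a neural net's "reasoning" is just weight
# matrices. Permutation importance (one common workaround) ranks inputs
# by how much scrambling them hurts accuracy -- a what, not a why.
# The dataset and model are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                      random_state=0).fit(X, y)

# The raw "explanation" a compliance officer would find: opaque arrays.
print([w.shape for w in model.coefs_])   # e.g. [(5, 16), (16, 16), (16, 1)]

# Post-hoc importance: how much does accuracy drop if a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```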

And while fintech promoters like to thrill the punters with talk of all-seeing robo-portfolio-management, deep learning really needs more data than financial price time series can offer. Dakota Killpack, a data scientist with Predata in New York, says: “Each price (in a time series) contains a lot more information than the relatively symmetric problems of vision. AI is weak where humans need their full decision-making capability.”

With too little data, machine learning can “overfit” solutions. That means the program is not learning how to analyse the data; it is just memorising it in an overlarge description. And because funds that overfit the same thin history will make the same errors at the same time, this can lead to systemically dangerous behaviour such as investment herding.
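A hedged sketch of what that looks like with a short price series: a model with as many parameters as data points reproduces every wiggle in sample — the “overlarge description” — and then fails the moment it sees a new point. The series is synthetic, and the polynomial stands in for any overparameterised model.

```python
# Sketch of overfitting on too little data: a degree-9 polynomial "explains"
# ten noisy prices perfectly but generalises terribly. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10, dtype=float)                 # ten trading days
prices = 100 + 0.5 * t + rng.normal(0, 1, 10)  # trend plus noise

overfit = np.polyfit(t, prices, deg=9)    # as many parameters as points
simple = np.polyfit(t, prices, deg=1)     # a plain trend line

# In sample, the big model looks "better" -- it memorises the noise...
print(np.abs(np.polyval(overfit, t) - prices).max())   # effectively zero

# ...but one step out of sample it goes wild, while the simple model is sane.
print(np.polyval(overfit, 10.0), np.polyval(simple, 10.0))
```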

Generally speaking, AI techniques work best when they augment rather than replace human analysis.

For example, Marek Bardonski, a machine-learning scientist with Sigmoidal in Warsaw, says: “In radiology, the algos are better than specialist doctors who are not so well educated but not as good as US radiologists. But in less developed countries we can (classify) 10 per cent of the images to get 50 per cent of the cases.”

AI systems have been oversold at times, especially in the fintech space. They have a long way to go before they replace their counterparts in biological systems.
