A live demonstration of artificial intelligence and facial recognition in dense-crowd spatial-temporal technology at the Horizon Robotics exhibit at CES 2019 in Las Vegas, January 10 2019 © David McNew/AFP/Getty Images

By delegating too many decisions to algorithms, we risk losing the ability to take the creative risks on which progress depends

Recently, I chaired a “strategy surgery” to which executives brought dilemmas, in search of a prescription to ease their strategic aches and pains.

“When and why should my company deviate from its strategy framework?” asked one member of the Financial Times 125 forum, which hosted the session. Another participant pointed out that strategy is “the art of making difficult choices”. Art it may be, but science is playing an increasing role in the selection.

It is already more than three years since DeepMind’s AlphaGo computer system learnt how to defeat the planet’s best human player of the pure-strategy game Go. Hedge funds have long sought to express successful trading and investment strategies in computer code, “to get above our emotional attachments to our own conclusions”, in Bridgewater founder Ray Dalio’s words.

Artificial intelligence-based algorithms are “supercarriers” of Dalio-style decision-making, according to a provocative new essay by Dirk Lindebaum, Mikko Vesa and Frank den Hond. Unchecked, one extreme outcome would be a sort of strategy singularity, in which slavish deference to the algorithm destroyed managers’ ability to evaluate plans at all.

The essay draws inspiration from the dystopian EM Forster short story “The Machine Stops”, published in 1909. Forster imagined a world in which enfeebled individuals live in honeycombed rooms underground, part of a totalitarian technology-dependent system. They rely on the “Machine” for communication, entertainment, and spiritual and physical sustenance. Most humans’ decision-making ability has atrophied. Those who rebel are exiled to the earth’s surface.

When the Machine suffers a cataclysmic failure, the underground society collapses with it. In a glimmer of hope, one doomed human suggests “humanity has learnt its lesson”.

Edwardian science fiction may seem a poor guide for strategists in the here and now. But Prof Lindebaum told me he and his co-authors, without demonising technology, wanted to use Forster’s tale to inform practising managers of the limits and risks of AI. He warns managers not to “end up in a state of learnt helplessness, through lack of first-hand experience”. If they delegate too many decisions to the machine, they will also lose the ability to innovate and take the creative risks on which progress depends.

In fact, advocates of machine learning are often among the first to point out these boundaries.

David Yang’s latest venture is Yva, an AI engine that uses corporate data to predict, among other things, whether key staff are likely to resign. Measured by number of neurons, even the most sophisticated AI is barely as bright as a bee, he told me. No wonder he recommends executives treat AI-powered counsel like a “second medical opinion”, rather than relying on it for strategic direction.

Andrew McAfee and Erik Brynjolfsson warned in their latest book, Machine, Platform, Crowd, of the danger of “Hippo” decision-making, based on the “highest-paid person’s opinion”. In theory, the computer offers an algorithmic antidote; in practice, as plenty of examples attest and as they acknowledge, it may also reproduce the bias of its programmers or of the data they use. Even at their most techno-optimistic, McAfee and Brynjolfsson still recommend putting “human intelligence in the loop, intelligently”, if only to inject common sense into algorithmically driven recommendations.

The old-fashioned analogue strategy plan can encourage a similar dependency. In 1994, in the early days of data analytics, the management professor Henry Mintzberg wrote about the pitfalls and fallacies of strategic planning: “The formal systems could certainly process more information, at least hard information [than humans]; they could consolidate it, aggregate it, move it about. But they could never internalise it, comprehend it, synthesise it”.

I asked Prof Mintzberg whether he had changed his view over the past 25 years. Not at all, he replied. Complex strategic dilemmas — what he calls “puzzling puzzles” — are now subject to overprecise programmed solutions.

In other words, the human strategist is still required, precisely to decide when and why to deviate from the strategy framework, whether that frame is built by a committee of people or a highly intelligent machine.

Prof Lindebaum and his co-authors suggest the death knell for human decision-making will sound the first time a court rules a doctor has killed a patient by ignoring the recommendations of an AI-based diagnostic tool. By extension, beware of the first shareholder lawsuit against a board that overrides the Machine’s strategy plan.

andrew.hill@ft.com

Twitter: @andrewtghill

