The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions.

Automation has already transformed agriculture and industry. Today, brown- and blue-collar workers are in the minority: about 80 per cent of US jobs are in services. Most of us deal with data and software in our working lives, not with bioware or hardware. The trouble is that computers eat data and software for breakfast. The digital revolution is now threatening white-collar jobs.

This is not because digital technology is intelligent, but because it makes tasks stupid — in other words, no intelligence is required to perform them successfully. Once that happens, algorithms can step in and replace us.

The consequence may be widespread unemployment today, but also new jobs tomorrow. Unemployment in the eurozone is still above 9 per cent, for example. Yet in Germany, demand for engineers outstrips supply. The same holds true in the UK. And according to the World Bank, by 2030 the world will need 80m healthcare workers, double the number required in 2013.

In a society in which algorithms and other automated processes are increasingly prevalent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which regularly take decisions on our behalf.

Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility when things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable.

The responses to the select committee inquiry are clearly good news. They show that there is plenty of intelligent work for humans to do in the future. But it won’t be white-collar workers filling these new positions. It will be experts who can take care of the new digital environment and its artificial agents. Algorithms are the new herd. Our future jobs will be in the shepherding industry. The age of green collars is coming.

The writer is professor of philosophy and ethics of information at the University of Oxford
