Why police use of facial recognition technology raises privacy fears
Campaigners are concerned over the lack of regulation and infringements of civil liberties. But the police say facial recognition is effective and harmless. The FT's Patricia Nilsson reports.
Possible outstanding suspect.
With shrinking resources and the growing availability of technology, police forces have started using facial recognition systems that scan people's faces to check whether they are on a police watch list. The South Wales Police have been using facial recognition since last summer, claiming it has led to over 450 arrests. But not everybody agrees that the technology works as well as the police claim.
People expect it to be magic and it's not magic. There's plenty of reasons to worry about it even if it were reliable, in terms of like what kind of, you know, expectations we should have of privacy.
Ed Bridges, a resident of Cardiff, is seeking a judicial review of the police's use of facial recognition, together with the civil rights group Liberty.
I was a few minutes up the road at the Cardiff International Arena taking part in a peaceful protest and looked across the street and saw that the facial recognition technology van was there filming peaceful protesters, and I think at that point I thought this is very peculiar and it's being done in a way that is trying to sort of manage the behaviour of members of the public.
If you're using a technology that puts people off protesting, then you're undermining your democratic values. The point is that there's no real transparency about how the police are using this technology, and more importantly, there is no law or guidance that restricts its use, so it's up to the police, their discretion, whether they store your image and how long they store it for.
The police, however, argue the technology is simply misunderstood.
People don't understand it and want to be reassured that we're not using it in some way which could undermine their privacy or could undermine their rights to freedom around South Wales or the country. If you're not on the watch list - and that watch list consists of people who have committed criminal offences and are wanted by us, the great majority, 99.9 per cent of people are not on that watch list - we don't even know if you've walked past it. Whilst they may challenge whether or not they committed the crime, or the circumstances, or the evidence as purported, nobody's challenged the identification of themselves as the person in that photograph.
The South Wales Police have previously compiled a watch list using images supplied to them by external agencies, such as Interpol. They say they're currently not planning on using images of people who are not in their custody image database, which consists of people who have been arrested but not necessarily sentenced.
The algorithms that power facial recognition technology scan a face to measure key identifiers, such as the distance between a person's eyes, from the eyes to the tip of the nose, or the width of the mouth. The system then compares this biometric data against a database of images to determine whether there is a match.
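The matching step described above can be sketched in miniature. This is only an illustrative toy, not the algorithm NEC or any police force actually uses: real systems rely on learned deep-feature embeddings rather than a handful of geometric measurements, and every name, value, and threshold below is invented for the example.

```python
# Toy sketch of biometric matching: compare a probe face's feature
# vector against a small "watch list" of stored vectors and report
# the closest entry if it falls within a match threshold.
# All measurements and names here are hypothetical.
from math import sqrt

def face_signature(eye_distance, eye_to_nose, mouth_width):
    """Bundle a few geometric identifiers into a feature vector."""
    return (eye_distance, eye_to_nose, mouth_width)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, watch_list, threshold=2.0):
    """Return the closest watch-list name, or None if nothing is close enough."""
    name, score = min(
        ((name, distance(probe, sig)) for name, sig in watch_list.items()),
        key=lambda pair: pair[1],
    )
    return name if score <= threshold else None

# A tiny hypothetical watch list of stored signatures.
watch_list = {
    "suspect_a": face_signature(62.0, 48.0, 50.0),
    "suspect_b": face_signature(70.0, 52.0, 55.0),
}

# A probe close to suspect_a matches; a very different face does not.
print(best_match(face_signature(62.5, 48.2, 49.5), watch_list))  # suspect_a
print(best_match(face_signature(90.0, 60.0, 70.0), watch_list))  # None
```

The threshold is the crux in practice: set it loosely and the system flags innocent passers-by (false positives); set it strictly and it misses genuine matches, which is why accuracy claims for these systems are contested.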
The South Wales Police use two different systems developed by the Japanese tech group, NEC. One is mounted on top of a van that the police can drive to any location they want to scan. It integrates with real time video footage and alerts the police whenever it sees a person who it thinks is on a specific watch list.
The second lets the police take an image of a wanted person from, for example, a crime scene video and search for a match against its custody image database.
This technology is hugely scalable. And when you combine it, especially with other kinds of surveillance technologies, it'll be pretty easy to enlarge that list of people for whom the facial recognition technology works to essentially include everyone.
In the end, the future of facial recognition in law enforcement will come down to regulation. And the debate on what such a framework would look like has only just started.