Potential visitors to the US were recently confronted with a new requirement on the long and complicated visa application form — to provide information about their social media identities.
Many shrugged off the addition, having long assumed that immigration authorities would in any case trawl through publicly available information. The US state department insisted the measures would enhance national security.
But some people, particularly within academia and civil society groups, expressed alarm. To them, it marked the latest in a series of invasive moves by governments to expand the use of technology to screen travellers, brushing aside privacy and safety concerns.
Such efforts, which are gaining ground across the world, stretch from social media screening for visa applications to biometric recognition in airport security and border control, according to Mana Azarmi, policy counsel at the Center for Democracy and Technology.
Screening travellers based on their social media accounts “jeopardises free speech and freedom of association . . . and can be incredibly chilling of free speech and academic freedom”, says the adviser to the digital rights non-profit.
Ms Azarmi highlighted the case of Ismail Ajjawi, an incoming Harvard University undergraduate and Palestinian national who in August was denied entry to the US following an hours-long interrogation that he says involved questions about his friends’ political posts on social media. He was subsequently permitted to enter the US and begin his studies the following month.
But beyond concerns about civil liberties, some argue that the measures are ineffective, particularly where automated systems are involved. “[The Department of Homeland Security’s] pilot programmes for monitoring social media have been notably unsuccessful in identifying threats to national security,” say researchers at the Brennan Center for Justice, a public policy think-tank.
“Even more damning are [the US Citizenship and Immigration Services’] own evaluations of the programmes, which showed them to be largely ineffective . . . [and] these difficulties . . . are compounded when the process of reviewing posts is automated,” they add.
Another area of contention is the use of biometric screening in airports, by both private companies, such as airlines, and government agencies. Such systems, which typically use a combination of facial and fingerprint recognition, are moving beyond pilot programmes.
London’s Heathrow airport last year outlined plans for a £50m project to implement biometric check-in, security and boarding that it said would “streamline the passenger journey”. Airlines including Qantas and Delta have also trialled similar systems.
But privacy advocates, particularly in the US, say that private companies collecting biometric data have few legal limits on how they can share and use that information, prompting bipartisan attempts at legislation to limit commercial facial recognition.
Governments, too, have embraced the use of technology at immigration checkpoints. US Customs and Border Protection said this year that it aimed to cover 97 per cent of departing travellers with biometric checks. Searches of travellers’ electronic devices by CBP rose more than 50 per cent between 2016 and 2017, the most recent year for which the agency has released statistics.
Even the EU, despite its reputation for tighter technology regulation, has implemented biometric screening at border crossings for non-EU visitors, and has funded an effort called “iBorderCtrl” to test automated lie-detection technology at ports of entry.
Some governments are taking technology-enabled surveillance at their borders further. In July, research from German cyber security group Cure53 found that Chinese authorities were installing intrusive data extraction software on phones at the border crossing between the Xinjiang region and neighbouring Central Asian countries. The app, called Fengcai or BXAQ, scans the smartphones of foreign tourists for “forbidden” files and collects users’ call logs, contacts and text messages.
Neither is such surveillance restricted to autocracies. According to a report by the Carnegie Endowment for International Peace think-tank, “liberal democracies are major users of AI surveillance”, with 51 per cent of “advanced democracies” already deploying such systems.
Civil rights groups worry that the moves are being implemented without proper consideration, as the technology remains imperfect and vulnerable to cyber attacks. “We’re concerned about mission creep . . . that customs and border protection authorities will be asked by other [law enforcement] agencies to run searches on their database. We’re also concerned about digital discrimination . . . the [facial recognition] technology works less well for people of colour and women,” says Ms Azarmi, adding that a number of these databases have already been breached by hackers.
“The results of a breach of face recognition or other biometric data could be far worse than other identifying data, because our biometrics are unique to us and cannot easily be changed,” said the Electronic Frontier Foundation, a digital rights non-profit group, in a report earlier this year.
The EFF encourages travellers to be vigilant when passing through airports, to remain on the lookout for signs that biometric scanning may be about to occur, and to insist on human verification wherever they are eligible. The group recommends that travellers limit the data they carry across borders, encrypt their devices and store information in the cloud while they travel.
“It’s important to assess your threat model before you travel. If you are concerned about your travel, it’s good to have the contact information of an attorney on hand,” says Ms Azarmi. “And on the private company side, you can usually opt out of those [biometric] services for now.”