
Humans have so far failed to keep up with the scale and sophistication of cyber attacks — so security companies are now starting to put their faith in artificial intelligence to protect networks from hackers.

From Apple to Twitter, tech companies are snapping up artificial intelligence start-ups and using the technology to do everything from predicting customer behaviour to interacting with users via virtual personal assistants.

But the security industry in particular has become excited about the potential of so-called machine learning, where computers learn without being explicitly programmed. For security companies, the growth of more sophisticated artificial intelligence promises the opportunity to catch up with hackers, who experts say have the upper hand.

For example, as the industry struggles to find qualified engineers, many companies are turning to artificial intelligence to supplement their workforces.

Tomer Weingarten, chief executive at security software provider SentinelOne, says cyber security is one of artificial intelligence’s most promising applications. “It can look at all the behaviours and interactions that happen on a given machine, the malware [cyber attack software], what happens when someone is attacking you, to learn what ‘badness’ looks like, how an attacker behaves and what they will do once they try to compromise the device,” he says.
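As a rough illustration of the approach Mr Weingarten describes — a model learning what "badness" looks like from machine behaviour — here is a minimal sketch in Python, assuming labelled behavioural telemetry and the open-source scikit-learn library. The features and data are invented for illustration and are not SentinelOne's actual model:

```python
# Minimal sketch: behaviour-based malware classification.
# Features and data are invented for illustration; real products use
# far richer telemetry and proprietary models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one observed process:
# [files_written_per_min, registry_edits, network_connections, child_processes]
rng = np.random.default_rng(0)
benign = rng.normal([2, 1, 3, 1], 1.0, size=(500, 4))
malicious = rng.normal([40, 15, 25, 8], 5.0, size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new observation: a process suddenly writing many files and
# spawning children behaves more like the "malicious" training data.
suspect = [[35.0, 12.0, 20.0, 6.0]]
print(model.predict_proba(suspect))  # [P(benign), P(malicious)]
```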

Artificial intelligence can perform the role of many lower-level employees, and it may increasingly need to: last year 209,000 cyber security jobs in the US went unfilled, and the global shortfall could rise to between 1m and 2m positions by 2019, according to a report by Intel Security and the Center for Strategic and International Studies.

The ideal is artificially intelligent computers that can stop themselves from being attacked, for example by hunting for programming weaknesses and fixing them. The first steps towards this goal were taken earlier this year at Def Con, the annual hacker conference in Las Vegas. There Darpa, the US Defense Advanced Research Projects Agency known for backing self-driving cars and GPS, ran a contest to invent such a machine, with teams building computers to compete against each other.

Seven teams from universities and private companies took part in the Cyber Grand Challenge, which was won by ForAllSecure, a team from Pittsburgh, and its computer Mayhem.

Darpa put up a $2m prize in the hope that the competition would change the future of cyber security and encourage others to explore the possibilities of using artificial intelligence to defend computer networks.


Artificial intelligence cannot yet operate completely independently of humans: even in the Darpa challenge, the computers were not good enough to beat humans. But Mr Weingarten says artificial intelligence can supplement his “heavy-duty security researchers”, adding to their understanding of how hackers behave by highlighting what is happening deep inside a machine.

“Some things happen at a kernel level [the core of the operating system] during execution [of an attack] that a human wouldn’t be able to [notice],” he says.

For example, artificial intelligence might be able to spot when ransomware — malicious software that locks files and demands a payment to release them — begins to encrypt documents. It may even be able to stop an attack in its tracks.
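One common way to spot encryption in progress is statistical: encrypted data looks almost random, so the entropy of bytes being written to disk jumps. A minimal sketch, with the threshold and the monitoring hook as illustrative assumptions rather than any vendor's method:

```python
# Minimal sketch: flagging possible ransomware via the statistical
# fingerprint of encryption. Ciphertext is near-random, so the Shannon
# entropy of freshly written data jumps towards 8 bits per byte.
# The 7.5-bit threshold is an illustrative assumption; a real agent
# would hook file-write events rather than check static buffers.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: roughly 4-5 for text, near 8 for ciphertext."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(data) > threshold

plain = b"Quarterly report: revenue grew 4% year on year. " * 100
cipher_like = os.urandom(4096)  # stands in for an encrypted document

print(looks_encrypted(plain))        # False
print(looks_encrypted(cipher_like))  # True
```

A burst of such high-entropy rewrites across many documents at once is the kind of pattern a human analyst would be too slow to catch by hand.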

Lawrence Pingree, a security specialist at Gartner, the research company, says artificial intelligence is at its most effective when it can reduce the number of “false positives” — events flagged as attacks that turn out not to be — that professionals have to sift through.

“Generally, [artificial intelligence] can identify malware really accurately but it can’t explain why it is there or who is behind the attack. The end goal is it could describe the malware in the context of all the interactions on the network today,” he says.

Some artificial intelligence-based security software is good at building visualisations of a network to help people explore potential weak points and problems more clearly, Mr Pingree adds.

Shuman Ghosemajumder, chief technology officer for Shape Security, has been working with artificial intelligence since he led Google’s efforts to protect the search engine from click fraud — automated or repeated clicking on pay-per-click advertisements to generate illegitimate charges or revenue.

It was essential then to analyse the billions of clicks that happen every day.

At Shape, he uses artificial intelligence to protect web pages and mobile apps against automated cyber attacks from botnets — computers co-ordinated to launch a cyber attack without their owners’ knowledge. “We’re looking at hundreds of different signals to analyse the ways that real human activity should look in every single transaction,” he says.

But criminal gangs have followed the development of artificial intelligence with interest. Many are using similar tools to imitate human behaviour, simulating how someone might log on to a website rather than using crude “credential stuffing” techniques — the mass, automated use of stolen customer log-in details to gain access to accounts.
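The crude version is detectable precisely because of its bulk: one source attempting log-ins to many distinct accounts in quick succession. A minimal sketch of that heuristic, with the threshold as an illustrative assumption:

```python
# Minimal sketch: spotting crude credential stuffing. The tell is one
# client attempting log-ins to many distinct accounts in bulk.
# The threshold is an illustrative assumption; a real system would also
# apply a sliding time window and combine many other signals.
from collections import defaultdict

MAX_DISTINCT_ACCOUNTS = 5

def stuffing_sources(login_attempts):
    """login_attempts: iterable of (source_ip, account) pairs."""
    tried = defaultdict(set)  # source_ip -> distinct accounts attempted
    for ip, account in login_attempts:
        tried[ip].add(account)
    return {ip for ip, accounts in tried.items()
            if len(accounts) > MAX_DISTINCT_ACCOUNTS}

attempts = [("203.0.113.7", f"user{i}") for i in range(10)]  # bot: 10 accounts
attempts += [("198.51.100.2", "alice")] * 3                  # human retrying
print(stuffing_sources(attempts))  # {'203.0.113.7'}
```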

Mr Ghosemajumder says: “[Hackers] are generating fake mouse movements, keystrokes, technologies that vary typing speeds to get around tools for detecting [too many of the] same typing speeds, and making it look like they’re coming from different browsers.”
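The typing-speed check Mr Ghosemajumder alludes to can be as simple as measuring the variance of intervals between keystrokes: humans are jittery, while naive replay scripts are metronomic. A minimal sketch, with the cut-off as an illustrative assumption rather than Shape Security's rule:

```python
# Minimal sketch: flagging scripted log-ins by how uniform the typing
# rhythm is. Humans are jittery; naive bots replay identical intervals.
# The 15ms cut-off is an illustrative assumption.
import statistics

def looks_scripted(keypress_times_ms: list[float],
                   min_stdev_ms: float = 15.0) -> bool:
    """Flag a session whose inter-keystroke intervals are too regular."""
    intervals = [b - a for a, b in zip(keypress_times_ms, keypress_times_ms[1:])]
    if len(intervals) < 2:
        return False  # too little evidence to judge
    return statistics.stdev(intervals) < min_stdev_ms

human = [0, 112, 230, 390, 444, 601, 755]  # irregular, human-like
bot = [0, 100, 200, 300, 400, 500, 600]    # metronomic replay

print(looks_scripted(human))  # False
print(looks_scripted(bot))    # True
```

As the quote suggests, attackers now add artificial jitter precisely to defeat checks like this — which is why Shape combines hundreds of signals rather than relying on any single one.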

Despite the industry’s high hopes, Mr Pingree warns that some companies use terms such as “artificial intelligence” and “deep learning” just for marketing. “The biggest trap some providers fall into is they say they have machine learning when they really don’t,” he says.
