“Conversation is the new interface.” The tech community’s use of this expression does not mean that we are going to start talking to each other more on a human-to-human, face-to-face level. It means rather that we will be talking more to robots, specifically those artificial intelligence programs that pop up on the likes of Facebook Messenger, Twitter, Slack and so on, to help you with tasks from scheduling to shopping.

Rather than going to a website to find information or downloading yet another app, we will summon these artificial intelligence assistants to do our bidding. Bark “find me a flight to Chicago on Saturday” into your phone and a “bot”, which understands your location and the fact that you mean this Saturday, will return with some choices.
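
Under the hood, a request like that is an exercise in intent and slot extraction: the assistant works out what you want (a flight search), pulls out what you said (“Chicago”, “Saturday”) and fills in what it can infer (your origin from the phone’s location, and that “Saturday” means the coming one). The rough Python below is a purely illustrative sketch of that step; the function names, rules and fields are hypothetical, not any real assistant’s code.

import re
from datetime import date, timedelta

def next_weekday(today, weekday):
    # Next occurrence of `weekday` (Mon=0 .. Sun=6), never today itself.
    return today + timedelta(days=(weekday - today.weekday() - 1) % 7 + 1)

def parse_flight_request(utterance, user_city, today):
    # Crude intent/slot extraction for phrases like "find me a flight to X on <day>".
    weekdays = ["monday", "tuesday", "wednesday", "thursday",
                "friday", "saturday", "sunday"]
    text = utterance.lower()
    dest = re.search(r"flight to (\w+)", text)
    day = next((d for d in weekdays if d in text), None)
    return {
        "intent": "search_flights",
        "origin": user_city,  # inferred from the phone's location
        "destination": dest.group(1) if dest else None,
        "date": next_weekday(today, weekdays.index(day)) if day else None,
    }

print(parse_flight_request("Find me a flight to Chicago on Saturday",
                           user_city="New York", today=date(2016, 7, 20)))
# {'intent': 'search_flights', 'origin': 'New York',
#  'destination': 'chicago', 'date': datetime.date(2016, 7, 23)}

Real assistants replace the regular expressions with trained language models, but the output is the same kind of structured query handed off to a flight search service.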

Chatbots will become the predominant way we interact with companies. Apple’s Siri, Microsoft’s Cortana, Google Now, Amazon’s Alexa and Facebook’s Messenger are among prominent examples. In addition, CB Insights, a venture capital database, has identified 21 start-ups building virtual assistants designed to help with everything from finding restaurants to monitoring your health. These have collectively raised over $120m in funding from venture investors.

You know the trend for chatbots is exploding when an accounting software company builds one. Sage, which has been providing software to small and medium-sized businesses for more than 35 years, is planning to launch a chatbot this summer. This will help anyone from freelancers to small business owners manage invoices and expenses. Send the bot a picture of a receipt, for example, and it will store it away with your expense claims.

Kriti Sharma, who built the bot for Sage, says one of the challenges has been ensuring that it is not annoying. She has spent considerable time thinking about how the bot should react in different situations. Ms Sharma was keen to avoid a fiasco like Microsoft’s Tay chatbot, which was taught to parrot hate-filled posts by Twitter users.

Swear at it and the Sage bot responds with a sad face emoji and says “I’d rather talk about accounting”. Tell the bot you love it and it says you have excellent taste before gently steering you back to accounting. The responses sound remarkably similar to the self-deprecating but quietly determined way that Ms Sharma herself speaks.
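
Behaviour like that is typically a set of hand-written rules layered on top of the bot’s language machinery: certain trigger words get a canned, in-character reply before the conversation is steered back to accounting. The toy Python below sketches the pattern only; the trigger lists and replies are invented for illustration and are not Sage’s actual code.

# Toy, rule-based "personality" layer: trigger words get a canned reply,
# then the bot nudges the conversation back to accounting.
PERSONALITY_RULES = [
    ({"damn", "hell", "stupid", "useless"},
     "\N{SLIGHTLY FROWNING FACE} I'd rather talk about accounting."),
    ({"love", "adore"},
     "You have excellent taste! Now, about those invoices..."),
]

def reply(message):
    words = set(message.lower().split())
    for triggers, response in PERSONALITY_RULES:
        if words & triggers:  # any trigger word present
            return response
    return "How can I help with your expenses today?"

print(reply("I love you"))        # compliment, then back to business
print(reply("This is useless"))   # sad face, "I'd rather talk about accounting."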

“We are very careful about the frequency with which the bot responds, and we have built the personality over time,” says Ms Sharma. “I now need to hire someone to take this further — a bot personality trainer.” Such a person might be a creative writer rather than a technical specialist, she says.

A vision of a future job flashes in front of me: as newspapers shed journalist jobs in the face of declining ad revenues, we are gradually rehired as microcopy writers providing the “voice” for robot interactions.

Companies would do well to invest in bot personalities. Remember Clippy, the animated paper clip that used to pop up on Microsoft programs when you were trying to type something?

“It looks like you are writing a letter,” Clippy would observe, sending users into such paroxysms of rage that Microsoft itself was openly mocking Clippy when it scrapped the software-based help system in 2002.

People dislike computer agents who disregard the human rules of etiquette, observed Stanford University student Luke Swartz in his 2003 thesis: “Why people hate the paper clip”.

The difference between Clippy and today’s chatbots is the arrival of deep learning, which makes computers capable of a far more complex level of pattern recognition and allows them to follow natural language and speech better.

However, they are still not “smart”, says Jonathan Mugan, co-founder and chief executive of Deep Grammar, a start-up company building a grammar-checking bot. “There is no way for a chatbot to be interesting or terribly useful right now,” says Mr Mugan. “The bot still doesn’t know what ‘Chicago’ is, it is just following trend data.”

The real revolution will come when bots understand the meaning of words as a human would, associating them with experiences they have had. This is not impossible, says Mr Mugan, although it would require a huge acquisition of data, with a robot possibly living with humans for several years to learn associations as a child might. Helping an artificial intelligence program reach the level of understanding that a four-year-old child has is the goal, Mr Mugan says.

Of course, a robot would only have to do this once; the program could then be copied a limitless number of times. Until then, prepare for some frustrating virtual interactions.
