Listen: Kasparov on risks and rewards of AI
Human beings have to be creative to understand how best to make use of AI, former world chess champion Garry Kasparov tells John Thornhill
Presented by John Thornhill. Produced by Fiona Symon.
Transcript
[MUSIC PLAYING]
JOHN THORNHILL: Hello, and welcome back to Tech Tonic, a podcast that looks at the way technology is changing our lives. I'm John Thornhill, innovation editor at the Financial Times in London. Last week we talked to Rick Howard, chief security officer of Palo Alto Networks about the current state of cybersecurity. This week we hear from a chess champion, whose defeat at the hands of a computer led him to investigate the ways in which humans can collaborate with computers to achieve better results than either could achieve alone.
GARRY KASPAROV: In many instances, human emotions are very important to change the outcome of the decision, because machines will always rely on us.
JOHN THORNHILL: That's the voice of Garry Kasparov, who spoke to me recently about both the risks and the potential for good that artificial intelligence can offer.
Garry, thank you very much for joining us on Tech Tonic. I wondered if we could start with your fascinating book, Deep Thinking. You are arguably the greatest chess player who has ever lived. You have played some 2,400 games since the age of 12, and lost only about 170 of them. And then in 1997, 20 years ago, you played Deep Blue and became the first reigning world champion to lose to a computer. What did that feel like at the time?
GARRY KASPAROV: For me it was not just a first lost game; it was my first loss of a match, a serious match, period. Going back and analysing the games now, from a scientific perspective the watershed moment was not 1997 but 1996. Though I won the first match-- I always want to remind people that there were two matches, and indeed I won the first one-- I lost game one of that first match in Philadelphia.
And the fact that the machine succeeded in winning one game under normal tournament conditions-- against the world champion-- signalled, to all people who could understand it, that this was the writing on the wall. At the time I still couldn't believe it, but now I understand that the rest was just a matter of time-- one year, two years, three years, four years. It was inevitable.
And now I can be more objective in analysing my performance in the '96 match, in the '97 match, and also in some subsequent matches against other machines. I did my best. I tried hard, yes. But I was in a race against time. It was just a matter of time before the machines got strong enough, the algorithms got good enough, and the databases grew big enough for machines to win the game of chess.
And I want to emphasise: not to solve it, but to win it. Because chess is not solvable. The number of legal positions in chess is around 10 to the 46th power, and the number of possible games is on the order of 10 to the 123rd. But those numbers are irrelevant when you have two opponents. And while humans always make some mistakes or inaccuracies, the machine has a steady hand.
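The game-tree figure Kasparov cites can be reproduced with a back-of-the-envelope calculation. The branching factor of roughly 35 legal moves per position and the roughly 80-ply length of a typical game are commonly cited averages, not exact values:

```python
import math

# Rough reconstruction of the game-tree estimate: with about 35 legal
# moves available per position and a typical game lasting about 80 plies
# (half-moves), the number of possible games is on the order of 35**80.
branching_factor = 35  # commonly cited average legal moves per chess position
plies = 80             # half-moves in a typical game

order_of_magnitude = plies * math.log10(branching_factor)
print(f"35^80 is about 10^{order_of_magnitude:.1f}")
```

This lands at about 10 to the 123.5, in line with the figure quoted, and dwarfs the roughly 10 to the 46th legal positions.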
JOHN THORNHILL: And chess is a very human game. And psychology is such an important part of it when you're playing against another human. It must be extraordinarily unnerving to play against a computer that showed no emotion, gave nothing back to you at all.
GARRY KASPAROV: But every game is a psychological one, whether you play chess, or poker, or whatever game you name-- backgammon. There's always an element of psychology there. In chess, yes, especially if you play a long match, it's tiring-- psychologically tiring-- and it's exhausting.
A machine is a different story. It's not just that the machine is represented by another human-- technically you have a human making the moves. At some point you realise that, unlike in a normal human game, there is no way back. If you make a mistake, you're out. One wrong move and the game could be over. There's no hope that the machine will show the same weaknesses as a human opponent and potentially help you get back into the game.
JOHN THORNHILL: And what you go on to describe in the book is that man and machine can work together. So now the best chess is played when humans interact with machines to play other such teams. Is that a kind of parable for the world we're moving into, do you think-- that man and machine can work in symbiosis?
GARRY KASPAROV: Yes, I think chess from the very beginning served as an ideal field to test machines' abilities. And it's not surprising that the great minds, the pioneers of computer science, like Alan Turing or Claude Shannon, all viewed chess as the ultimate test of machine intelligence. It's quite ironic that when a machine actually beat the human world chess champion, it was not the intelligent machine they anticipated, but rather a dull, unintelligent supercomputer.
JOHN THORNHILL: Yes, you call it at one point "a programmable $10 million alarm clock."
GARRY KASPAROV: Yes, yes. But still, it was a step in the right direction, because it showed what machines were capable of. And after losing that match, and also losing my hopes of playing a rematch-- because IBM decided to retire the computer-- while licking my wounds, I thought, why not seek cooperation? If you can't beat them, join them.
So what about getting together and playing the perfect game of chess-- combining human intuition and human understanding of the game with the machine's brute force of calculation and memory? And I introduced what I call advanced chess: a human plus a machine versus other humans plus machines.
And we learned quite a few important lessons by playing these games. One is that a human plus a machine will always beat a super machine, because even an ordinary computer will compensate for our human weaknesses. It will guarantee that we will not make mistakes under pressure, and that we will not let our silicon opponent off the hook, because at that point it's simple to switch to the computer and let the machine finish the job.
But the most interesting lesson came when you have humans and computers playing each other-- a human plus a computer versus another human, or a group of humans, plus computers. The most important thing is not the strength of the human player, and it's not the power of the computer they use-- it's the interface. It's the cooperation.
So this is the best form of cooperation, and a superior interface always plays a vital role in deciding the outcome. For instance, if you're looking for the ideal human-plus-machine team, you don't need a Garry Kasparov. To the contrary, you have to look for someone else.
JOHN THORNHILL: So you call this Kasparov's Law. Could you explain what you mean by that?
GARRY KASPAROV: I think it goes beyond chess. Again, chess can serve as a test field. But we must recognise that in many instances machines are doing better than we are. And when you look at the absolute strength of top human players and machines, the gap is huge.
For those who play chess: they know that Magnus Carlsen is in the 2,800-plus category-- his rating is around 2,830, 2,840 now. And the machines are probably 400 or 500 points stronger-- 3,300 to 3,400, which is an insane rating. It's the same as the difference between Magnus and a national master in an open tournament.
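The scale of that gap can be made concrete with the standard Elo expected-score formula; the 2,830 and 3,330 ratings below are just illustrative stand-ins for the figures Kasparov quotes:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (0..1) of player A against player B under the
    standard Elo model: 1 / (1 + 10^((Rb - Ra) / 400))."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 2,830-rated human against a 3,330-rated engine (a 500-point gap):
print(f"{elo_expected_score(2830, 3330):.3f}")  # about 0.05 - one point in twenty
```

A 400-point gap gives roughly 0.09, so in either case the weaker side is expected to score only a handful of points per hundred games.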
So if you have such a gap in strength, you'd rather find someone who will recognise that in 80 or 90 per cent of cases you don't have to interfere with the machine. You don't have to play your own game. You have to make sure you just compensate for the remaining deficit in the machine's quality of play. Because if you're as strong as Magnus, or any top player, you will try to play your own game. It's a psychological stumbling block. It's about your pride.
And the same can apply to other walks of life. For instance, say you have a machine that works with data in medicine and gives you a diagnosis, and you team that machine up with a top professor, who is also good, but still worse-- say the professor gets 60 per cent of diagnoses right (these numbers are just off the top of my head) and the machine 80 or 85 per cent. Then you will never find the best combination, because all you need is to make sure the remaining 10 or 15 per cent is covered by human intuition, without interfering with the 85 per cent where the machine is definitely superior. So you'd rather go for an experienced nurse, who can be a good operator, simply following the machine and using human intuition to clarify certain situations where machine brute force is not enough.
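Kasparov's point can be turned into toy arithmetic. Every rate below is invented for illustration: the model assumes a human teammate helps by rescuing some of the machine's errors and hurts by overriding answers the machine had right.

```python
def team_accuracy(machine_acc: float, rescue_rate: float, spoil_rate: float) -> float:
    """Accuracy of a human-plus-machine team under a toy model:
    the human fixes a fraction (rescue_rate) of the machine's errors
    but also overrides a fraction (spoil_rate) of its correct answers."""
    return machine_acc * (1 - spoil_rate) + (1 - machine_acc) * rescue_rate

machine = 0.85  # machine alone (illustrative)

# A deferential operator: rarely interferes, occasionally rescues an error.
nurse = team_accuracy(machine, rescue_rate=0.4, spoil_rate=0.02)

# A proud expert: rescues more errors, but insists on playing "his own game".
professor = team_accuracy(machine, rescue_rate=0.6, spoil_rate=0.15)

print(f"machine {machine:.3f}, nurse team {nurse:.3f}, professor team {professor:.3f}")
```

Under these made-up numbers the deferential teammate beats both the machine alone and the stronger-but-interfering expert, which is exactly the interface point above.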
JOHN THORNHILL: Right. Now you subtitled your book Where Machine Intelligence Ends and Human Creativity Begins. Where is that point?
GARRY KASPAROV: It's also about psychology. Many people today buy all these doomsday stories about the end of humanity and the dominance of the machines, influenced by movies like The Matrix or The Terminator. And of course you hear a lot of these predictions coming from the scientific world and the business world. I think that's wrong, because the future is a self-fulfilling prophecy. If you believe it will be bleak and dark, you may end up in total darkness.
JOHN THORNHILL: So you think all these scare stories about the singularity and superintelligence are nonsense?
GARRY KASPAROV: I think they are just not relevant for us today. The biggest danger today is not what will happen to us in 100 or 200 years-- whether there will be a singularity, whether there will be Skynet-- but whether the technology gaining ground now will end up in the hands of bad guys, the Putins and Kim Jong-uns of this world. This is the real danger. This is what we should care about, without fantasising about the very distant future.
And also, you hear people complaining about jobs being lost, saying we are in a race against the machines, or a fight, or at war. But it's called progress. I don't want to sound callous, or to be accused of being totally indifferent to the suffering of those who are losing their jobs. But millions of jobs were lost in manufacturing, and we didn't hear the same outcries.

The difference now is that machines are going after people with college degrees and Twitter accounts. That's why it suddenly became a very hot topic. But people tend to grab the benefits and complain about the negatives. It's all part of the same package.
So yes, people live longer. That's why they want to have good jobs even at age 50 or 60. But they live longer because of the technology, and this technology demands younger people to meet the demands of industry. It's a very complicated circle of problems, and it's hard to separate them. Protracting the agony by saying, let's slow down the process, is not going to help. To the contrary, it will just create more problems, because we will not move fast enough to generate enough income to actually offer help to those who are left behind.
JOHN THORNHILL: So you really have quite an optimistic take on the future of technology, and how man and machine can interact. Can you tell us a bit more about the areas where you think humanity is going to benefit from this symbiosis?
GARRY KASPAROV: Almost everywhere. First, machines will relieve us of some repetitive tasks. And we can see that better, smarter machines make humans smarter too: our kids are far more sophisticated in using these machines and getting the best out of them. So in order to maximise the effect, we simply have to recognise that our contribution will belong to the last few decimal places.
And there's nothing wrong with that. I don't think we should complain about it, or say it is diminishing our intelligence and our integrity, because we want to get the best result. And if you have a very powerful machine-- a powerful tool, a powerful supporter for your research and your activities-- then a tiny tweak in channelling this massive force can change the results quite dramatically.
And I think that's a vast territory, where an emotional response is needed to improve the quality of decisions. We should recognise that anything we do while knowing how we do it, machines will do better. That's important: there is no domain in which machines will not eventually learn and surpass us in the things we can quantify.
JOHN THORNHILL: Right. And you focus quite a lot on the so-called Moravec paradox. Could you explain to us what that is, and how you think, in a way, that informs your optimism?
GARRY KASPAROV: Yeah-- the paradox is that machines are good at things humans are not, and the other way around. So now it's about finding ways to combine these two complementary sets of qualities. And we know there are certain things machines will always do better. Machines are even trying to cover territories we thought were ours-- like vision.
You can keep improving machines in the territories that, according to the Moravec paradox, could eventually be conquered by them. Because the more we learn about the way we see, hear and talk, the more data is available-- and the better the chance for machines to imitate these functions, to reproduce them, and then maybe even improve on them.
The problem is that we still don't know-- and I don't think we'll find out anytime soon-- whether our brains can function separately from our bodies. There are still many mysteries about the way the human body, the human brain, the human organism functions. And in many instances, human emotions are very important to change the outcome of a decision. Because machines will always rely on us, and will try to come up with the best answer based on evaluation-- it could be billions of different parameters and patterns, but in the end it has to look at the end value.
JOHN THORNHILL: But the boundaries of what machines can do are constantly advancing. To take just one example, we always say that machines can't recognise human emotion. But facial recognition technology can now detect whether people are scowling or smiling.
GARRY KASPAROV: That's not emotion. That's the--
JOHN THORNHILL: But you can have that kind of competence in understanding-- analysing whether someone is happy or sad or angry.
GARRY KASPAROV: But the folly is in the understanding: the same facial expressions on you or me may not mean the same thing. For instance, one situation I always use in my presentations: let's say a machine runs your e-wallet, and it has all the data about your finances-- your salary, your bonuses, your mortgage rate, you name it. Everything. And you're in a store looking at an expensive gift, and the machine immediately signals that this is beyond your budget, that it's the wrong purchase.
So is the machine right? Absolutely. But make a little alteration to the story: you have your son or daughter next to you, and it's a birthday present. That is not quantifiable. You will never explain to the machine that this is something you must do "because"-- and this "because" could have billions of different variations. Because it's important. Maybe you are still married. Maybe you are divorcing. There are so many situations where your emotional reaction is very important to change the outcome of the decision.
And I keep coming up with many, many situations where you find that machines are always right, because they know the odds. But the odds do not necessarily offer the best outcome in decision making. And as long as we keep expanding the frontiers-- as long as we learn new things-- we can always have the upper hand.
JOHN THORNHILL: But doesn't that lead us into dangerous territory, where we invest too much authority in machines? You say machines are right. And in probabilistic terms they may well be right more often than not.
GARRY KASPAROV: Well, no, absolutely. There will be--
JOHN THORNHILL: But there are still areas, as you're saying, where humans clearly have to have an override.
GARRY KASPAROV: You can always come up with an unfortunate situation where machines recommended the wrong decision. But in big numbers-- if you look from the perspective of the entire human race-- machines simply perform better. The autopilot in a plane just does a better job.
But sometimes you have to interfere. And we know that in a plane the pilot's interference is what-- 10 per cent? It could be 5 or 10 per cent. But at the end of the day it's still a relatively small percentage of the time where you need human intervention to guarantee the outcome.
JOHN THORNHILL: And you don't think there's a law of diminishing returns-- that the more data we have, and the more complex our systems become, the more spurious correlations we build in? That there is a kind of diminishing return to the ability of these machines to use the data we have?
GARRY KASPAROV: No. Again, it depends on whether we are stuck with closed systems or move to open-ended systems. As long as we keep looking for new things-- for instance, if we go back to the space exploration we abandoned because it was too risky-- we will keep finding new things. It's very important for us to recover the spirit of innovation, of breakthrough innovation. It's humans who can actually feel that that's the right way to go.
And even with all the data in the world, you may end up with a machine getting it wrong because of something at the very beginning. For instance, if we talk about a self-learning AI: if there is a bug in there, you will not find it, because it's all a deep learning process. Different versions keep playing games against each other, they collect the data, they make comparisons, and they come up with what they believe is the better strategy-- again, based on their evaluations.
But just imagine we had driving data from a thousand cars-- same model, same year of production-- and for some reason the red cars performed better than cars of other colours. It's possible: correlation, not causation. For you and me, it's just accidental, so we would definitely not make a big fuss about it. For a machine, it could be the signal that red paint makes a car run faster.
Now, it's not tragic if you catch it early. But the information could be stored somewhere deep down, and a hundred iterations later the machine could come up with a solution based, among other things, on this wrong conclusion. Those are the situations that need this little tweak of human intellect.
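The red-car trap is easy to reproduce in a sketch: generate data where colour is, by construction, unrelated to performance, and some colour will still come out "best" in the sample. The colours and numbers below are illustrative, not real driving data.

```python
import random

random.seed(42)

# 1,000 cars: colour is assigned at random and "performance" is drawn
# from the same distribution for every car, so there is no causal link.
colours = [random.choice(["red", "blue", "grey", "white"]) for _ in range(1000)]
performance = [random.gauss(100, 10) for _ in range(1000)]

def mean_performance(colour: str) -> float:
    values = [p for c, p in zip(colours, performance) if c == colour]
    return sum(values) / len(values)

means = {c: round(mean_performance(c), 2) for c in ("red", "blue", "grey", "white")}
spread = max(means.values()) - min(means.values())

# Some colour always looks "best" in a finite sample; a learner that
# keeps this gap as a feature has mistaken correlation for causation.
print(means, round(spread, 2))
```

The gap between the "best" and "worst" colour is pure sampling noise, which is exactly the kind of signal a naive learner can latch on to.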
And again, it's not my favourite story, but it's a story that unfortunately is not well publicised. In 1983, in probably one of the most dangerous moments of the Cold War, a Soviet officer, Stanislav Petrov, basically called off retaliation after the early warning system showed five Minuteman intercontinental ballistic missiles heading for the Soviet Union. By protocol he had to initiate an immediate retaliatory strike.
And he explained it later: he applied pure human reasoning. One, if it's a first strike, you don't use five missiles-- you use whatever you have, 500. Five is just wrong. Two, the early nuclear detection system was new and not trustworthy. And because of these first two points, he waited for the radar to come up with corroborating evidence-- and it didn't show any. That's why he came to the conclusion that it was a malfunction. And he was right.
JOHN THORNHILL: Yeah. Thank God for his intuition. Otherwise we wouldn't be here, would we?
GARRY KASPAROV: Again, there are those who say, no, no, no, we'll lose it all. But throughout the history of the human race we have been creating machines to help us improve our living standards and to get rid of some of the primitive tasks. We kept building these machines, and now we have something that is ready to interfere with the [INAUDIBLE].
But at the end of the day, I think we should still treat it as an important tool, and recognise that there's still room for us. We just have to rethink it. We have to be creative to understand how best to make use of it in the long term.
JOHN THORNHILL: Final question. I want to ask you about Vladimir Putin, and his comment that whoever leads in the field of AI will be the ruler of the world. You were saying earlier that you think we have far more to worry about from stupid humans than from superintelligent machines--
GARRY KASPAROV: Evil humans.
JOHN THORNHILL: --evil humans.
GARRY KASPAROV: They're not stupid, by the way.
JOHN THORNHILL: What are the dangers of this technology being used in bad ways?
GARRY KASPAROV: Oh, any technology could be used for destructive purposes. So we know that. And by the way, it's much easier to build a nuclear bomb than a nuclear power plant.
And the growing danger is that these days the Putins of this world-- and specifically Putin-- don't have to invent this inside their own countries. They don't have to spend billions and billions mobilising scientists and resources. They can buy it from elsewhere. That's globalisation. And Putin controls so much money that he can always go anywhere and make a lucrative offer.
So that's why I'm saying that all the calls for slowing down the process could be counterproductive, because science cannot be stopped. People will not stop researching. And we don't want some of the scientists, upset and annoyed by the lack of response from democratic governments, to go elsewhere. Putin has already made it clear that he will be looking for such opportunities, just to get his hands on technology that could offer him new leverage.
JOHN THORNHILL: But you think we ought to have international protocols stopping lethal autonomous weapons systems. Is that possible, do you think?
GARRY KASPAROV: Unfortunately, in the modern world it's virtually impossible. But you still have to try to make sure that the free world controls the buttons, because democracies don't fight each other. They can have diplomatic conflicts, and they can quarrel, but they don't go to war.
Now the real danger comes from the Putins, Kim Jong-uns, Iranian mullahs, and others who see that war could be the only way to prop up their power. That's why it's very important that we recognise all these complex problems and come up with a strategy that will secure the development of the human race and its technology, without handing it over to those who-- and this is the irony-- could use it, and most likely will use it, to undermine the foundations of the free world where the technology was invented. Because we have seen it already: the fake-news industries, the troll factories, the attacks on democratic elections in Europe and the United States. They are all based on indiscriminate use of technology invented in the free world.
JOHN THORNHILL: Thank you very much, Garry.
GARRY KASPAROV: Thank you.
JOHN THORNHILL: We'll be back next week with another episode of Tech Tonic. In the meantime, if you'd like to comment on today's show, or suggest any topics you'd like us to cover in future episodes, please email us at techtonic@ft.com. Don't forget to subscribe to our show on your favourite podcast app. And if you write a review, that will help other people find us, too. Thanks for listening. This episode of Tech Tonic was produced by Fiona Symon.
[MUSIC PLAYING]