Tech Tonic

This is an audio transcript of the Tech Tonic podcast episode: ‘Superintelligent AI — The Doomers’

[MOVIE CLIP FROM ‘FRANKENSTEIN’ PLAYING]

John Thornhill
In the novel Frankenstein, written by Mary Shelley, a young, ambitious scientist called Victor Frankenstein builds a monster — an intelligent, conscious being. 

[MOVIE CLIP FROM ‘FRANKENSTEIN’ PLAYING]

In this scene in the 1931 film of the book, Frankenstein’s friends warn him that his experiment is madly dangerous. But he presses on. A thunderstorm rages in the background. Sparks fly from electric wiring, and the scientist brings his monster to life.

[MOVIE CLIP FROM ‘FRANKENSTEIN’ PLAYING]

Frankenstein’s story doesn’t end well. His creation turns on him, goes on a murderous rampage, and the scientist spends the rest of his life horrified and terrified by what he’s brought to life. It’s a story I thought about often while researching this series, because some people who work in the field of artificial intelligence today believe we could be on the verge of building a new kind of creature, a machine at least as intelligent as a human, a superintelligent computer that can think like a human, but faster and smarter in every way. 

Yoshua Bengio
Right. So first of all, you have to understand that your brain and my brain are machines. They’re biological machines. But what is going on in your brain is computation. 

John Thornhill
That’s Yoshua Bengio. He’s known as one of the pioneers of modern artificial intelligence. His research has helped pave the way for some of the most advanced AI systems around today. And he believes that superintelligent AI is coming — and sooner than we think. 

Yoshua Bengio
It’s not very difficult. If you just follow the trend of the progress we’ve seen in the last few years and ask, well, where will we be in three years, five years, ten years from now? Is there a chance we could achieve, in many areas, abilities comparable to or better than humans? And a lot of people in AI think that’s very likely, actually, let’s say in the next decade. 

John Thornhill
You might think that Professor Bengio, someone who has spent his whole career building artificial brains, would be thrilled by the prospect. But he’s not. Unlike Dr Frankenstein, who could barely contain his excitement as his creature came to life, Professor Bengio is worried. He’s worried that, like Frankenstein’s creation, superintelligent machines will turn on their creators and could even destroy us. 

Yoshua Bengio
If it comes within the next decade, or worse, within the next few years, I don’t think, and many others don’t think, that society is organised, is ready to deal with the power that this will unleash and the disruptions it could create, the misuse that could happen. Or worse, what, you know, I’ve started to think more about this year, the possibility of losing control of these systems. There is already a lot of evidence that they don’t behave the way that we want them to. 

John Thornhill
And do you think that could constitute an existential risk for humanity? 

Yoshua Bengio
Yes. No, I have a grandkid and I often think about him. He’s a baby. I want to make sure he has a life. When I say 10 or 20 years, you know, we might get superhuman AI systems, well, he won’t even be an adult in 10 years. 

[MUSIC PLAYING]

John Thornhill
This is Tech Tonic from the Financial Times. I’m John Thornhill, the FT’s innovation editor. 

Madhumita Murgia
And I’m Madhumita Murgia, the FT’s artificial intelligence editor. In the last year, efforts to build machines that rival the intelligence of humans — superintelligent AI — have gathered pace. But some people in the AI community are worried about the dangers this might pose. They worry it could even threaten the future of humanity itself. 

John Thornhill
So in this season of Tech Tonic, we’re asking, are we really on the verge of building superintelligent machines? And if so, how worried should we be? Should we take the rise of AI as seriously as the risks of pandemics or nuclear war?

[MUSIC PLAYING]

So, Madhu, artificial intelligence turning against us and taking over the world, that’s pretty much the stuff of science fiction. But it seems to have re-emerged as a topic of conversation in the last 12 months among AI researchers and people building AI systems. Why is that? 

Madhumita Murgia
We’ve really seen this boom over the past 12, maybe 18 months or so, of a new type of artificial intelligence that we’re now calling generative AI. And really what that is, is computer software that can create text, images, audio, video, even code in a way that’s almost indistinguishable from the outputs of humans. People may have played with these tools already. We’ve experienced them in the form of ChatGPT, Midjourney that creates pictures, Dall-E 2 that can make images also. And many of the researchers involved with creating these tools were actually themselves surprised by how sophisticated these outputs were and with the abilities of these software systems. 

John Thornhill
And we’ve seen billions of dollars being poured into these AI companies, haven’t we? 

Madhumita Murgia
Yeah, exactly. The companies that have built these so-called language models, this is the model behind, say, ChatGPT — companies like OpenAI, like Google DeepMind — they are trying to make money, but that’s just sort of one of the minor goals along the way. The ultimate goal really is to build something called artificial general intelligence or AGI, and that’s a computer system that would rival humans in terms of intelligence, reasoning, understanding. And this is in contrast to narrower, more specific AI systems, which I’ve reported on for many years as well, and which you have also, and these are already in use today. They’ve existed over the last five to 10 years, everything from facial recognition systems, for example, to the recommendation algorithms that we see on our social media feeds or Netflix. All that is driven by more narrow AI. By contrast, AGI would be something that we could use to solve some of humanity’s most intractable problems, you know, scientific problems, climate change, healthcare and so on. 

John Thornhill
So let’s get some terms straight here. We’re talking about artificial general intelligence or AGI, which is what these companies are talking about. But how does that relate to superintelligence? 

Madhumita Murgia
So I would say that there’s a lot of debate around these definitions, even within the AI community. So they’re not hard and fast rules. But from what we know today, AGI is this ability to think and reason at a similar level of intelligence to humans. And this is seen as an important step towards the much more far-off, sort of still fictional, goal of creating something superintelligent, which would be more scaled up, quicker, better and smarter than any human that exists today.

I recently spoke to Sam Altman, who’s the chief executive of OpenAI, the creator of ChatGPT, in San Francisco, and I asked him about, you know, how far away he thought AGI was. And he said the research shows that his company had formed a hypothesis that they could test for what the pieces of AGI could look like, which to me felt a lot closer and more tangible really than anything they’ve ever said before. But the idea of superintelligence, this scaled up superhuman technology, that’s still far out into the distance. But if they can scale up to a level where they’re better than any humans, then that’s where people start to get worried, that we might lose control, that they might communicate with each other and we might no longer be in charge. 

[MUSIC PLAYING]

John Thornhill
If you’ve experimented with ChatGPT since it was launched last year, you might have used it to help you write an essay or draft an email, or maybe do something like submit a resignation letter in the style of Shakespeare. But for some people, playing around with ChatGPT came as a shock, even for experts in the field like Yoshua Bengio who we heard from at the start of the show. 

Yoshua Bengio
It is ChatGPT that has turned me around, where initially I was just looking at, oh, I can set it up so that it makes these mistakes. But while I was doing this, I also realised that, hey, it’s getting a lot of these things that I thought would have taken another decade at least. It’s getting a lot of these things right. 

John Thornhill
For most of his career, Bengio assumed that human-level intelligence was possible, but it would take decades, if not centuries, to reach. Now he thinks it could just be a few years away. 

Yoshua Bengio
It’s essentially nailing our intuitive abilities. So in our cognition, there are broadly two kinds of cognitive abilities. One is roughly intuition. So everything you can do without even thinking about it. And then there is, of course, reasoning and planning and all sorts of other important things. We’ve nailed the first thing. And because I’ve been working for almost a decade now on the second thing, like, how do we get reasoning and attention, and essentially what corresponds to conscious behaviour, how do we get that into AI, I realised that it could be very quick. We might not be far from bridging that gap. There’s been enough progress that it could be around the corner. 

John Thornhill
And so Bengio started to wonder — if it’s possible that we’ll have AI as intelligent or more intelligent than humans in two years or five years from now, what would happen? What could happen, he concluded, is that AI could threaten the existence of the human race.

Can you explain to our listeners who I think have a hard time understanding this, how lines of code could threaten humanity? 

Yoshua Bengio
Maybe another way to phrase the question is, how could a computer become dangerous? Well, if the computer is not connected to anything and doesn’t talk to anyone, it can’t be very dangerous. But that’s not what’s going to happen, right? It’s going to be connected to the internet. It’s gonna be talking and interacting with people. And if it has a goal of preserving itself, it will not want to let us turn it off, for example. And in order to make sure you won’t be able to turn it off, it might want to copy itself onto many other computers. Then we are in a territory where what they want and what we want might be in conflict. And if they’re stronger than us in ways that matter in the real world, we could be in real danger. 

[MUSIC PLAYING]

John Thornhill
So, Madhu, surely the answer here is just to create machines that are subservient to humans, to create AI that is incapable of disobeying a human command. 

Madhumita Murgia
So this is where the story really starts to sound like science fiction. Because some people in the AI world worry that this in practice would be hard to do. So first of all, the worry is that we might end up building powerful machines that threaten humans somehow by accident. For example, let’s imagine we build this super powerful superintelligent AI system and we give it the task of, say, solving climate change. It might reason that, hey, it’s the humans that are producing all the carbon, and so I should kill all the humans. And also, maybe I shouldn’t tell the humans about my plan because they just would try and switch me off and scupper my plan. So I should just go ahead and kill everyone with no warning. 

John Thornhill
All right. But why not just tell the computer to solve climate change and, by the way, we do not want you to kill everyone while you’re at it? 

Madhumita Murgia
Because it’s actually really hard to tell AI systems what to do. So a large language model like the one behind ChatGPT, for instance, the way it works is by training on billions and billions of words of data. Basically, you feed it all of the text, say from the internet, books, blogs, newspapers and so on, and then you ask it questions and see what comes out. We know how to optimise them so they get better, but really nobody knows how the system comes up with its answers or really why it does what it does. Even the companies behind these systems don’t quite know what’s going on under the hood. They can adjust controls to change the outputs, but they don’t really know why. And so at the moment this means that chatbots can come out with some really strange stuff, even harmful content, fabrications and so on that you don’t expect. So for instance, you could have racial stereotypes, you could have gender biases. All of these things are learned from the data on which these algorithms are trained. Even something more dangerous, like, for example, they might be able to give instructions on how to build a bomb or perpetrate a cyber attack. 

John Thornhill
And that’s what people in the AI industry call the alignment problem, right? 

Madhumita Murgia
Exactly. So how do you align an AI software to the desires and needs and values of human beings? But in the future, if you have a superintelligent AI that could threaten humanity, then this potential misalignment could actually end up being extremely dangerous and possibly even existential. Instead of throwing out a fabricated picture of the Pope in a puffer jacket, it’s throwing out plans to kill people. And it actually may have the agency and the power to enact those ideas. 

[MUSIC PLAYING]

Eliezer Yudkowsky
Hi. I’m Eliezer Yudkowsky. I have been working on trying to figure out how to build AI that won’t eventually kill everyone for about the past 22 years or so. 

Madhumita Murgia
Eliezer Yudkowsky is the co-founder of a think-tank called the Machine Intelligence Research Institute, and he’s pretty well known in the AI world for his warnings about a coming AI apocalypse. 

Eliezer Yudkowsky
First time somebody builds AI that’s powerful enough to actually be dangerous and kill everyone, they’re gonna screw up and we’re all going to die. That’s the way it plays out in real life. 

Madhumita Murgia
Yudkowsky says that this alignment problem is the big risk of AI because we will have no way of controlling the system’s objectives or even understanding what it wants. So the most likely outcome of building superintelligent AI is pretty bleak. Everyone on earth will be killed. 

Eliezer Yudkowsky
Because we cannot shape what they want. And if it just wants a bunch of inscrutable things that make no sense to us, then it doesn’t hate you. It doesn’t love you. You’re made of atoms that it can use for something else, and you’re in the way of where it’s working. And it also doesn’t want you building other superintelligences that could actually compete with it. So there’s the possibility where it just switches everyone off so that they can’t build any superintelligences or launch any nuclear weapons that might land where it was working. It doesn’t hate you, but we don’t know how to make it actually be friends with us. We don’t know how to make it love us, and that kills us off as a side effect. 

John Thornhill
Madhu, this sounds all incredibly alarmist. Does Yudkowsky think there’s a solution to this? 

Madhumita Murgia
No, basically. He reckons that the only thing to do is to stop developing AI altogether. 

Eliezer Yudkowsky
I’ve watched the rate at which AI progresses and we’re just out of time. The capabilities, the power level of the AI is just running vastly, vastly ahead of our ability to steer it, and it is progressing faster than our ability to steer it is progressing. We are out of time for technical solutions, and what we need to do at this point is back off. 

[MUSIC PLAYING]

John Thornhill
OK. That’s a pretty radical conclusion. We’re essentially doomed, he’s saying, if we keep going down this trajectory. 

Madhumita Murgia
Well, Yudkowsky is known as quite an extreme AI doomer, and these ideas of a coming AI apocalypse, they’ve always been on the fringes of the AI conversation. But what’s really striking now is how in just the last few months, these fears have found their way into the mainstream. 

[MUSIC PLAYING]

John Thornhill
Some AI doomers like Eliezer Yudkowsky believe that if we build superintelligent AI, then everyone on earth will die and that the only solution is to shut down AI research completely. He even says that the specialist computer chips used for AI development should be strictly controlled, and governments should be ready to bomb the data centres of rogue states if they pursue superintelligent AI. 

Madhumita Murgia
But in the last 12 months, these fears about the AI apocalypse have found their way into the mainstream conversation. Earlier this year, there was an open letter signed by more than 1,000 tech researchers and executives, including Elon Musk and others, calling for a six-month pause on developing more advanced AI systems. Another prominent AI researcher, Geoffrey Hinton, a friend of Yoshua Bengio’s, quit his high-profile job at Google because he said he wanted to talk more freely about the risks of AI. Both were among the experts who signed a statement saying that the risk of extinction from AI should be a global priority alongside the threat of pandemics and nuclear war. 

Yoshua Bengio
Until we know better how to build superhuman AI that will be safe, we should probably not do it. 

John Thornhill
Yoshua Bengio says that the threat from advanced AI is so serious, there’s an argument for halting its development altogether. The problem, though, is that completely shutting down that kind of research would be very difficult to do. 

Yoshua Bengio
The cat is out of the bag. Companies are racing ahead, governments are racing ahead, the militaries are racing ahead, because there’s so much promise, especially economic promise, and on the military side. It is gonna be very difficult to stop that train. And so we should try to slow it down as much as we can. If we are able to stop it, great. 

John Thornhill
It’s chilling to hear the experts in the field warn about what is essentially the end of humanity caused by lines of code. But here’s the thing — not everyone agrees with this grim scenario. 

Yann LeCun
So I think that scenario is preposterous (laughter) and, you know, I’m using well-chosen words here. 

John Thornhill
Yann LeCun is the head of AI at the tech giant Meta. He shared the Turing Award, often described as the Nobel Prize of computing, with Yoshua Bengio and Geoffrey Hinton in 2018. LeCun agrees that human-level AI is on its way, perhaps in a few years or decades. But he rejects the idea that it could ever pose an existential risk to humans. First of all, he argues, why do we assume that intelligent machines will want to take over? 

Yann LeCun
You know, people very often are conditioned by science fiction and think about the Terminator scenario where, you know, when a machine is more intelligent than humans, necessarily it will want to compete with humans and have to take over. And I think this is a completely wrong way of thinking because intelligence has nothing to do with a desire to dominate. It’s not even true for humans. If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither. So the idea somehow that this desire to dominate is linked with intelligence is just completely false. 

John Thornhill
And second, he argues, the scenario in which AI kills us all by accident, by misunderstanding its objectives or our intentions, simply wouldn’t happen. 

Yann LeCun
Because, you know, are we that stupid to give unlimited power to an AI system that has a unique goal without any guardrails so that it’s gonna come up with the stupid idea of killing all humans? No. If you design a system, let’s say a robot to fetch you coffee, and someone stands in its way, because the only objective of the system is to fetch you coffee, it’s gonna kill everyone, you know, that stands in its way, that prevents it from fetching coffee. Now, again, this is a ridiculous scenario, because you would have to be that stupid to not build guardrails into the system to basically stop it from even bumping into humans or even getting close to them. 

John Thornhill
LeCun admits that current AI systems, large language models like ChatGPT, are inherently difficult to control. But he says future systems will be designed very differently with built-in objectives and rules on how to behave. 

Yann LeCun
We’ve been designing guardrails for humans for millennia. That’s called laws. The difference is that AI systems, we can hardwire those laws into their way of acting. We can’t do this with humans. So I think it’s gonna be much, much easier to make AI systems safe than it is to make humans safe. 

John Thornhill
LeCun says machines as intelligent as humans are coming. But they should be welcomed, not feared. 

Yann LeCun
It will amplify human collective intelligence, if you want. It’s like having a staff of really intelligent people working for you. We shouldn’t feel threatened by this. This is going to amplify human intelligence, similar to the effect of the invention of the printing press on humanity back in the 15th century. I’m thinking of this in terms of a new renaissance for humanity, like, the next step for humanity, really. 

John Thornhill
If you’re confused about who to believe, you should be. People working in AI are sharply divided. 

Madhumita Murgia
You have doomers on one side, including experts like Yoshua Bengio, and enthusiasts like Yann LeCun on the other. One half warning of the end of humanity and the other heralding a new renaissance. 

John Thornhill
Two really intelligent, knowledgeable people with decades of collective experience in building AI, with completely different views on the risks and what we should do about them. So I asked Bengio what he thought of his friend and colleague LeCun’s view. 

Yoshua Bengio
I don’t understand his logic. I can see some of the arguments. He is convinced that everything is gonna go fine, that we’ll find solutions to the problems as we go. And, you know, good will prevail over bad. I’m happy that he has so much confidence. But I think it’s dangerous not to look at the possibilities ahead. I’m not saying that, you know, here’s the scenario. This is how it’s going to happen. I don’t know. But I don’t see any argument, no one has given me any, like, reasonable argument to discard those possibilities. In fact, the more I read about some of the thoughts that people in AI safety have been, like, putting together about the risks, the more I think that those who say it’s just science fiction haven’t done their homework, that there are real sequences of events that seem quite plausible that could lead to catastrophic outcomes, whether it’s due to misuse or loss of control. 

John Thornhill
I feel rather sorry in a way for a lot of policymakers who are trying to make sense of this debate. Who do you think they should listen to? What are the criteria for credibility in this world? 

Yoshua Bengio
Well, anybody who makes very strong claims like, oh, it can’t happen or don’t worry, everything is gonna be good, I don’t think should be receiving as much credibility as people who are saying, I don’t know, but I think we should look into it. It’s difficult to understand some of the things we’re talking about today. It’s sometimes not just intellectually difficult, but it’s also psychologically difficult to accept that, oh no, there is this other thing and oh, it looks like science fiction scenarios, should I really worry about it? It requires us to really let go of our preconceptions and just like look rationally at the facts, at the arguments and the possibilities, and then think about what can we do. We need to pay attention and we need to better understand what can go wrong and how we can mitigate those problems, or even maybe decide collectively that we don’t want to go there, period. 

Madhumita Murgia
John, which side are you most convinced by? 

John Thornhill
For the moment, I think I’m in camp LeCun, the AGI enthusiast. Although these systems are incredibly impressive, almost magical when you use them, they’re still very limited in their capabilities as Yann LeCun describes. But I think what is unnerving about this whole debate is just the sheer unknowability of the field. We simply do not know answers to a lot of these questions. We don’t know how fast it’s going to develop and we don’t know what path it’s on. So longer term, I think we absolutely should take the concerns of experts like Yoshua Bengio very seriously. 

Madhumita Murgia
And I think what’s fascinating is that regardless of how convinced we are by these warnings of AI bringing about the end of humanity, they’re being increasingly prioritised by governments and by regulators. I was recently at the AI safety summit at Bletchley Park that the UK government convened, and what I heard there, which I thought was interesting, is even though you had these researchers who disagreed on the big existential risks like Yann LeCun, like Yoshua Bengio and others, they did come to a sort of consensus that there were clear near-term risks like disinformation during elections or deepfakes, for example, or even things like cyber attacks that they needed to figure out solutions to. But all the while, there are billions of dollars still being poured by these companies into their mission of building artificial general intelligence. And what’s baffling is that they’re doing it even as they’re warning about the risks of AI. So in the next episode of this season of Tech Tonic, we’re speaking to one of those companies and asking them, how close are we to building AGI? Why do we need it? And if there are risks, what should we do about them? 

Unnamed speaker
The challenge ahead of us is that these systems can and will allow for immense good, immense progress and a much greater degree of societal resilience if we can deal with the misuse side of them. 

John Thornhill
And we’ll also be asking, is this whole debate around the existential risks of AI a dangerous distraction? 

Unnamed speaker
So I can’t talk about whether it’s deliberate or not, but certainly it’s beneficial to them in that way to have the attention focused on these fake fantasy scenarios of existential risk. 

John Thornhill
You’ve been listening to Tech Tonic from the Financial Times with me, John Thornhill. 

Madhumita Murgia
And me, Madhumita Murgia. 

John Thornhill
Our senior producer is Edwin Lane. The producer is Josh Gabert-Doyon. Manuela Saragosa is executive producer. Sound design and engineering by Samantha Giovinco and Breen Turner. Original music by Metaphor Music. The FT’s global head of audio is Cheryl Brumley. 

Madhumita Murgia
This is the first episode in this season of Tech Tonic on superintelligent AI. We’ll be back over the next four weeks with more. Get every episode as it lands by subscribing to Tech Tonic on your usual podcast platform. And in the meantime we’ve made some articles free to read on FT.com. There’s John’s interview with Meta’s Yann LeCun, along with my reporting from the recent AI safety summit that took place in November here in the UK. Just follow the links in the show notes. 

[MUSIC PLAYING]

Copyright The Financial Times Limited 2024. All rights reserved.