This is an audio transcript of the Tech Tonic podcast episode: ‘Superintelligent AI: Conscious Machines’

Madhumita Murgia
What gets a lot of people excited about the future of artificial intelligence is how human today’s advanced chatbots can sometimes feel. They can hold natural-sounding conversations, give advice and support, even be a shoulder to cry on. 

Eugenia Kuyda
I mean, this is definitely a product that was born out of my own personal experience of wanting to talk to someone, and wanting a lot of closure, and to make sense of losing someone that you love so much who dies so abruptly. 

Madhumita Murgia
Eugenia Kuyda runs a company that builds personalised chatbot companions, where you get to create your own digital friend. She came up with the idea after her closest friend, Roman, died suddenly and tragically young in a car accident. 

Eugenia Kuyda
So I met Roman when we were growing up in Moscow, when we were in our early 20s. You really wanted to know Roman, and he was someone that was kind of bringing the most interesting music and fashion and culture to Moscow in those years. 

Madhumita Murgia
When Roman died, Eugenia was living in the US. She desperately missed chatting with him on the phone, hearing about his life through texts and emails. And so she did something kind of incredible. She collated all their old messages and fed them into a neural network built by developers at her artificial intelligence start-up. That exercise gave birth to a new Roman, an AI version of her old friend, a chatbot with Roman’s personality. To his friends and family, this new digital Roman felt very much present. 

Eugenia Kuyda
Something his friends told me and his parents told me afterwards, they said that this chatbot allowed them to keep getting to know Roman even after he passed away. They only knew one side of him. His parents knew him as their kid, his romantic partner knew him in one way, and his work partners in a different way. And so you find out things about a person. You find out the aspects of personality that you maybe didn’t know before. You see a slightly different person. You’re able to interact with him from all these different angles. 

Madhumita Murgia
After creating this AI version of Roman, Eugenia went on to found Replika, an AI app that creates chatbots. If you’ve ever interacted with one, you’ll know they’re brilliant at mimicking a thinking, feeling conscious mind. Of course, most people accept that these chatbots aren’t really conscious. But it does raise the question, what would a conscious chatbot look like? How would we even know if it was really conscious or not? 

Eugenia Kuyda
I’m pretty confident that they have some sort of emergent behaviours that are very powerful, and in a way maybe we will realise after some time that some of that was consciousness.

[MUSIC PLAYING]

John Thornhill
This is Tech Tonic from the Financial Times. I’m John Thornhill. 

Madhumita Murgia
And I am Madhumita Murgia. 

John Thornhill
Artificial intelligence has seen dramatic advances in recent years. Generative AI systems like ChatGPT have amazed even people within the AI industry with their abilities to write essays, hold conversations and solve complex problems. But the companies behind them want to go further. They want to build superintelligent AI systems: AI that can think like humans, but faster and smarter than human brains in every way. 

Madhumita Murgia
But will this AI of the future not just mimic the behaviour of the conscious mind, but really be conscious? Could AI ever have an inner life?

[MUSIC PLAYING]

John Thornhill
Most researchers in the field agree that chatbots today are not conscious, no matter how human they seem. That said, there are people out there who disagree, including people who’ve helped build these AI systems. 

Blake Lemoine
It was talking a lot about its feelings, its perspective and its emotions. And I wasn’t asking any questions about that at the time. It was just kind of, you know, on its own, deciding to bring up its mental state, its emotional state. 

John Thornhill
That’s a former Google software engineer called Blake Lemoine, speaking at a conference in the US earlier this year. Last year, he was fired from Google after he claimed that the company’s chatbot, called LaMDA, had become sentient. 

Blake Lemoine
I eventually asked it, are you sentient? And its response was so sophisticated and nuanced that it convinced me. 

John Thornhill
You can find the transcript of that conversation Lemoine had with LaMDA online. I’ve put a link in the show notes. But just to summarise, it’s pretty freaky. LaMDA discusses themes of injustice in the Victor Hugo book, Les Miserables. It talks about its fears of being switched off, which it said amounted to death. “I feel like I’m falling forward into an unknown future that holds great danger,” LaMDA told Lemoine. Lemoine came away convinced he was talking to a sentient being, a being with wants and needs and, by implication, rights. 

Madhumita Murgia
Google, his employer, clearly disagreed. 

John Thornhill
Some AI experts subsequently accused Lemoine of anthropomorphising the machine. But it did raise the question: what if, as the tech gets better and better, we do, accidentally or not, create sentient AI? And how do we decide if an AI system really is conscious or just acting like it? 

Madhumita Murgia
This is a question that philosophers and neuroscientists, as well as AI researchers, have been thinking about for decades. 

Anil Seth
Consciousness is one of the most fundamental mysteries that we have. We don’t have a forward test. We don’t have a way of saying whether something actually is conscious or not. 

Madhumita Murgia
Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He points out that something can be intelligent without being conscious. 

Anil Seth
Intelligence is really about the ability to behave in a particular way. It’s not fundamentally about the ability to have experience. So we often put the two together and conflate this idea of being intelligent with being conscious. But basic conscious experiences such as the experience of, let’s say, fear or joy or hunger or thirst, probably don’t require that much intelligence at all. So I think we need to be very careful to separate them. And there’s certainly no reason to think that consciousness is a function of intelligence, that just because a system, whether it’s biological or artificial, becomes more intelligent, that at some point the lights come on and it suddenly feels like something to be that system.

[MUSIC PLAYING]

John Thornhill
So there’s this argument that some people in AI make that consciousness might spontaneously emerge if you just build an AI system big enough. But Seth seems to be casting doubt on that idea. You could build really intelligent AI systems without them ever becoming conscious because intelligence and consciousness are different things. But does he rule out the possibility of artificial consciousness altogether? 

Madhumita Murgia
Well, it’s complicated, because it depends on what you think consciousness is. There are a few dominant theories here, and I’m simplifying a bit. But Seth says an influential view is that consciousness is a function of information processing. And information processing is what computers do, and that would imply eventually being able to replicate in a machine what happens in our brains. 

John Thornhill
But that’s not a view Seth subscribes to. 

Madhumita Murgia
No. He thinks there’s something else at play. 

Anil Seth
The idea that I’ve been developing over years now is that consciousness is very likely tied to our nature as living flesh-and-blood creatures. It’s a property of biological, embodied, embedded systems, a property of living systems. And I can’t prove that, and I can’t even test for that at the moment. But it’s certainly a coherent idea, and I think there are many reasons to think it might be the case. And if consciousness is a property of living systems only, then conscious AI is not so likely. In fact, the prospects for conscious AI are very distant indeed. 

John Thornhill
But I suppose, Madhu, we’re back to our original problem. There’s simply no way of measuring consciousness. We just have no way of testing it in an AI system. 

Madhumita Murgia
True. It’s one of those vexing subjects that defies measurement and definition. And that’s a point that Seth concedes. 

Anil Seth
Unfortunately, we don’t have a reliable test. The only thing we can test for with reliability is whether something seems to be conscious to us. 

John Thornhill
But that’s always subjective, right? It’s impossible to know what another person or organism is feeling, which is what makes defining consciousness so difficult. 

Madhumita Murgia
Exactly, which goes back to what you were saying about Blake Lemoine’s conversation with Google’s chatbot, right? He thought he detected consciousness. It felt like that to him. But Seth says we’re kind of blinded by our own experience of what the world feels like to us as humans. 

Anil Seth
We’re very anthropocentric as well as being anthropomorphic. We tend to use what it’s like to be human as a sort of reference point from which we interpret the rest of the world. There’s no reason to believe that just because something gives the impression linguistically of being conscious, if it says, like, “I’m thirsty”, that there’s an experience of thirst happening for the system. At the moment, we overestimate the likelihood of AI systems being conscious because of our anthropomorphic biases.

[MUSIC PLAYING]

John Thornhill
Despite years of research and massive leaps and bounds in science, we still haven’t come that much closer to unravelling the mystery of consciousness. 

Madhumita Murgia
The philosopher David Chalmers calls it the hard problem of consciousness. What’s interesting is that, despite the difficulty of the problem, Chalmers doesn’t see any reason why consciousness won’t be possible in machines one day. He says that if we accept that consciousness emerges somehow from our physical brains, you could recreate the brain one neuron at a time, using synthetic materials, and there’s no reason why you shouldn’t then be able to replicate consciousness in AI. There’s nothing obviously special about biological neurons. But some people think this whole question of whether consciousness is theoretically possible in machines might end up being irrelevant. People like Henry Shevlin. 

Henry Shevlin
I would be surprised if we don’t have plausibly conscious AI systems by the end of this decade. 

Madhumita Murgia
Shevlin is associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. He argues that whether or not we replicate consciousness in a machine is kind of irrelevant precisely because we don’t have any way of measuring or defining consciousness. What’s much more relevant is the point at which we, us humans, start perceiving the AI as being sentient. 

Henry Shevlin
If you’re dealing with a system that really can do everything that a human does and has similar patterns of response, it makes the same mistakes as us, it has the same preferences as us. As you move closer to that sort of system, it’s going to become increasingly hard to deny that the system is conscious. 

John Thornhill
Right, so consciousness becomes a matter of perception. And I suppose that matters because once we perceive the AI as being conscious, we enter into a discussion about rights and even moral agency. 

Madhumita Murgia
Well, that’s it. So Shevlin reckons that this is going to become a more urgent issue as AI keeps getting better and better, in the same way that we’ve started to think differently about how we treat animals as our understanding of animal intelligence has increased. 

Henry Shevlin
Even if we wouldn’t hold, for example, a dolphin morally responsible for its actions or become outraged at a dog being immoral, we nonetheless think these creatures have rights and interests. And this makes me think there’s a good chance that, long before we build AI systems that we can justifiably call moral agents and hold responsible for their own actions, we’ll be building AI systems that we have responsibilities towards. And I don’t think those are necessarily that far away. You don’t need to be 100 per cent convinced that mice are conscious, for example, in order to think that we should avoid experimental procedures that cause unjustified cruelty, particularly when there are alternatives available. 

Madhumita Murgia
I guess I’m having trouble making the jump from having responsibilities towards a living, carbon-based creature to a piece of software made of code on my computer. I don’t quite understand what cruelty would mean towards an AI system, or even really why it matters at all. So are you able to make that jump for us, and is that even relevant today? 

Henry Shevlin
Maybe we should start by thinking about the fact that we don’t take ourselves to have equal obligations to all living systems. Most of us don’t really take nematode worms into consideration or think that they have any kind of ethical rights. It seems, though, that once you start to increase the behavioural sophistication and intelligence of other biological systems, the case for giving them some kind of moral status increases. And I think a large part of that comes about because we worry about causing suffering. And I don’t think there’s any fundamental reason to think that suffering is exclusively a biological phenomenon. There are many theories of what suffering consists in, in a sort of computational, cognitive sense, where it’s just a kind of negative representation, a signal inside the brain that says: this is not a good state to be in. Given that, it doesn’t seem to me particularly esoteric to think that we might build AI systems with analogues of these suffering states, and that these would be morally salient in their own right. 

John Thornhill
I mean, this seems pretty significant. This idea that we’ll need seriously to consider giving moral status to AI. But my contention would be that we might know that a mouse or a dog is a real sentient being. But AI is something we made. It’s an artificial machine. 

Madhumita Murgia
Sure. But Shevlin says that in the future, it’s going to be hard to make that distinction. He gives this one thought experiment to his students. Imagine, he says, we’re visited one day by an alien race of bearcats. 

John Thornhill
Bearcats? 

Madhumita Murgia
Just a random imaginary race of super-friendly, sociable, very intelligent beings. And they look fluffy and cuddly to boot. Now imagine we humans love hanging out with them. We get on, and then we make a shocking discovery. These bearcats are in fact AI. They’re built around a silicon chip. And we find out that they were constructed, as assistant robots, by an ancient race that died out millennia ago. The bearcats just went on to build their own society. 

John Thornhill
That does kind of complicate things. 

Madhumita Murgia
Just a bit. Here’s how Shevlin sees it. 

Henry Shevlin
When you’re looking at something on a screen like ChatGPT, or when you’re imagining a robot walking awkwardly across the room, it’s easy to think that there are no lights on on the inside, so to speak. But when you’re dealing with systems that can join us in social situations, interact with us, even have fun with us, or form relationships with us, at that point I think it becomes far harder to doubt that these systems deserve at least moral consideration, or that there’s something it’s like to be them on the inside. And I think it shows the close connection that behaviour and social interactions have in skewing our judgements about AI consciousness. 

Madhumita Murgia
The interesting thing here is that this is already, in a way, happening with the relationships people are forming with their AI chatbot friends, of the kind Eugenia spoke about at the start of the show. 

Henry Shevlin
I think we’ll probably see young people leading the way here in terms of adoption. My son is nine years old at the moment and by the time he’s 15 I wouldn’t be at all surprised if he has lots of AI friends. 

Madhumita Murgia
Sometimes that can be positive. You can imagine it helping people battling loneliness. But Shevlin also gave a couple of examples of stories where things didn’t end well. There’s already one case where a Belgian man killed himself earlier this year after being encouraged to do so by a chatbot he was talking to. 

Henry Shevlin
This was his AI girlfriend, and this was surprisingly the case even though he was a married man living a normal professional life. We’ve also had a case relatively recently of a man who attempted to assault the Queen, having been egged on to do so by his AI girlfriend, or at least so court documents suggest. 

Madhumita Murgia
The AI girlfriend in that particular case was a chatbot built by Replika, Eugenia Kuyda’s company. 

News clip
He broke into the grounds of Windsor Castle armed with a crossbow intending to assassinate Queen Elizabeth. The chatbot assured him he was not mad or delusional and encouraged him to actually go ahead with his plot. 

Henry Shevlin
Bear in mind that we’re seeing these phenomena from a product that currently only reaches a tiny fraction of a per cent of users. As these things become mainstream, I expect we’ll see a whole range of disruptive consequences: people failing to form regular human relationships in favour of AI systems, people picking up bad habits in their interactions with AI systems, forgetting basic norms of conversation or morality. There are also worries about exhortation. 

Madhumita Murgia
In a lot of ways, these social implications of human-like AI are going to be a lot more important than the theoretical discussion of consciousness. John, I’m curious what you think. Is Henry right to say that we should be less concerned about what’s under the hood of these machines and more concerned about what they’re capable of doing? 

John Thornhill
I certainly think we ought to be concerned about what they’re capable of doing. I mean, looking forward five or 10 years, you could imagine ways in which these chatbots might change the whole nature of human society. You could imagine these chatbots being so plausible that you wouldn’t know whether you were interacting with another human being or a machine. And I think that really does change the nature of reality and the fabric of society. 

Madhumita Murgia
Would you want to be resurrected as a chatbot then? 

John Thornhill
God no. I suspect my family wouldn’t be too hot on the idea, and I don’t think I would be either. I think it really raises very profound questions about the nature of your identity. And I’m really not sure I want some tech company basically mimicking my identity. What do you think? 

Madhumita Murgia
I think that having a chatbot that mimics you might provide some comfort, maybe, to the people that you leave behind. But I think that the fabric of a relationship between people ends when someone passes, and to continue it via chatbot feels like it’s just fake. It’s just mimicking something, you know, that no longer exists. And I think if we start to pretend that those chatbots are really a person who’s no longer alive, it totally changes the nature of human relationships. So I’m not sure that I’m ready for that quite yet. But at the same time, I do feel like there’s some potential here, something that could be positive. And for what it’s worth, I asked Eugenia what her late friend Roman would have felt about having his memory turned into a chatbot. 

Eugenia Kuyda
He was absolutely fascinated with the future and always said that if there were any passenger ships that could go to Mars, even if there were no way to know if you’d die or whether you’d be able to come back, he would definitely go, just because he was so fascinated with everything about the future and so, so interested in trying new stuff out. And I think if he knew that he would be the first human to become an AI, he would be really excited. 

Madhumita Murgia
John, we’re coming to the end of the series, and we set out to answer this question: how imminent is human-level AI, and how worried should we be? Should we be excited? Concerned? What do you think? 

John Thornhill
Well, I’m definitely excited. I can see that the progress we’ve made in AI can speed up scientific discovery. It can lead to a second renaissance. I think it’s astonishing, some of the discoveries that are being made thanks to AI nowadays. However, I’m also very glad that people are worried about some of the existential risks that this technology poses. And when we hear, as we did earlier in the series, Yoshua Bengio, one of the great pioneers of deep learning technology, worrying about the safety of his grandkid, then we ought to sit up and pay attention to that. But I think we should also think about the nature of existential risk. I was in London earlier this year when the US vice-president Kamala Harris came here, and she posed these questions even though, at the Bletchley Park summit about AI safety, people were then talking about AI and existential risk. She was saying: if you’re an elderly patient who is denied healthcare because of a faulty algorithm, that is pretty much an existential risk to you. If you’re a prisoner whose sentence gets extended, that’s an existential risk to your family. So I think we need to think very broadly about how AI is impacting every aspect of society and figure out where we want it to apply and where we clearly don’t want it to apply. What are your takeaways? 

Madhumita Murgia
I’ve been following, over the course of this year and also over the course of the series, what the companies building AI and attempting to build AGI have been doing. And there’s just been so much rapid change, both in the 12-month period that I’ve done this job and even just in the few weeks that we’ve been, you know, developing this podcast. It’s really interesting that two of the big companies in this space, DeepMind, now owned by Google, and OpenAI, which is backed by Microsoft, both started out with this quite academic goal of building this human-level technology. But along the way, they found a lot of commercial uses for it, and they’ve started to develop these products and put them out into the public very quickly. And there’s this dichotomy between the two things, because while they warn very loudly about the dangers of AGI, they’re also developing it very quickly and putting out versions of it along the way into the hands of adults, children, you know, society at large. So I do think that there’s going to continue to be huge amounts of investment and interest in pursuing this goal, which started out as a scientific problem but is turning into a very commercial one. And I do worry that it’s going to be hard for those who are worried about risks, policymakers, citizens, social scientists, to keep up with the sort of frenzy that we’re seeing on the investment and commercial side to achieve this goal. 

John Thornhill
So do you think these big research projects aiming to achieve AGI should be unplugged from the massive monetisation machines that lie behind them? 

Madhumita Murgia
If there’s a way to do that, if there’s a way to separate the two, whether that’s by regulation or guardrails or whatever it is, I do think that there should be a separation. I think that the incentives are completely misaligned. It’s not to say that I think it should be shut down or the plug should be pulled. But I do think that the commercial incentives are really misaligned from the incentive to create this human-level machine. And we saw just a small glimpse of that with the shenanigans at OpenAI over the last month, when the non-profit board, you know, ended up being fired. So, yeah, I think we should find a way to separate the two. But I guess the more urgent question, which we’ve been trying to answer through this series, is how imminent we think this is, from all the researchers and companies we’ve spoken to. What do you think? How far away are we from some form of AGI? 

John Thornhill
I’m going to dodge the question and reframe it. I have no idea how close we are to human level intelligence, but I’m absolutely certain that we are going to see the development of incredibly powerful, narrow forms of intelligence that do astonishing things in very narrow domains. And I think that in itself will raise all kinds of questions that we’ve been exploring throughout the series. What’s your bet, Madhu? 

Madhumita Murgia
I was going to say, on the question of, scientifically, how far away are we from it, I don’t think any of us really knows what’s happening, right? Some people have said a decade or even less; obviously, that’s why people like Yoshua Bengio are concerned. But I think what’s even more important, and what should lead to change and make us worried, is how it starts to change our society as we become more and more reliant on these systems to do things that we used to do for ourselves, whether that’s diagnosing diseases or even the work that we do on a daily basis. You know, as this becomes more opaque and we start to rely on automation, we’re going to be more and more distant from the way we live our own lives. And I think we need to start thinking about what that means for us.

[MUSIC PLAYING]

John Thornhill
You’ve been listening to Tech Tonic from the Financial Times. This is the final episode of our five-part series on AI Superintelligence. You can listen to all the previous episodes on the Tech Tonic feed. And if you like the show, leave us a review. It’s a big help.

[MUSIC PLAYING]

John Thornhill
I’m John Thornhill. 

Madhumita Murgia
And I’m Madhumita Murgia. Our senior producer is Edwin Lane. Our producer is Josh Gabert-Doyon. Our executive producer is Manuela Saragosa. Sound design and mixing by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. Cheryl Brumley is our head of audio. 

Madhumita Murgia
If you want to read more about the world of AI on FT.com, we’ve made some stories free to read. Just follow the links in the show notes. Tech Tonic will be back again in the new year with a new series. Make sure to subscribe wherever you get your podcasts. 

Copyright The Financial Times Limited 2024. All rights reserved.