Just over a decade ago, Andrew Ng was part of a Google Brain project that showed the power of deep learning technology.

For three days, Ng’s team fed a neural network millions of unlabelled images from YouTube videos. After training, the system could identify features such as cats in images it had not encountered before — even though it had not been explicitly taught how. This research became known informally as the “Cat Paper” and laid the groundwork for future advances in artificial intelligence. 

At around the same time, from his perch as a Stanford professor, Ng pushed into online teaching, making a course on machine learning available to anyone with an internet connection. Its popularity, along with that of other “massive open online courses”, or Moocs, at the time, led Ng and his colleague Daphne Koller to found online education provider Coursera. 

A few years later Ng moved to Baidu, the Chinese search giant, to help deepen its autonomous driving and AI research efforts. Today he invests in and builds an array of AI start-ups, runs one of his own, and continues to teach courses on AI. 

When the FT visited Ng’s Palo Alto offices, he pulled out a laptop and turned off its WiFi to demonstrate how an open-source large language model (LLM) from French AI start-up Mistral can run without needing to send data to the cloud. 

“The model is saved on my hard disc and then it’s using the GPU and CPU [graphics processing unit and central processing unit] on my laptop to just run inference,” he said. When it was given the question of what a reporter should ask Andrew Ng about AI, the background it delivered on Ng and his work looked like the kind of response one would get from ChatGPT, OpenAI’s hit LLM-powered chatbot.  
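The kind of setup Ng demonstrated can be pictured in a few lines. The sketch below is a minimal illustration, assuming the llama-cpp-python library and a quantised 7bn-parameter model file already downloaded to disk (the file name is a placeholder); once the weights are local, nothing needs to be sent to the cloud.

```python
# Minimal sketch: running a quantised open-source model offline with
# llama-cpp-python. The model file name is a placeholder; any GGUF build
# of a 7bn-parameter model downloaded in advance would do.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # local file, no network needed
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the laptop GPU if one is available
)

prompt = "What should a reporter ask Andrew Ng about AI?"
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```

Everything here runs on the laptop itself; the only real requirement is enough memory to hold the quantised weights.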

An advocate of open-source AI development, Ng has emerged as an outspoken critic of some government efforts to regulate it. Here he speaks about AI’s current capabilities, why warnings of extinction risk are overblown, and what good regulation would look like.

Ryan McMorrow: When do you use these open-source AI models? 

Andrew Ng: I run multiple models on my laptop — Mistral, Llama, Zephyr. And I use ChatGPT quite often. But for privacy-sensitive things that I don’t want to send to a cloud provider, I would tend to do it all on my laptop: brainstorming on really confidential projects, say, or help with writing that contains sensitive financial figures. Open-source language models are actually getting pretty good.

RM: Yet many tech companies are desperate for Nvidia’s chips to run AI. Why should they bother if the Mistral model on your laptop can handle it?

AN: This is a smaller language model: it’s only 7bn parameters, and not competitive with GPT-4 for sophisticated reasoning tasks. GPT-4 is much better at answering complex questions. But for simple brainstorming, simple facts, this is fine. And it’s sometimes pretty fast as well, as you see.

Hot chip: sales of Nvidia’s processors have soared, as big tech companies favour them for developing AI systems © I-Hwa Cheng/Bloomberg

Training a model from scratch, though, is completely infeasible on my laptop — that still needs tens of millions of dollars. Training, which uses a massive amount of compute, and inference [running a trained model on new data] on the very large models would both be beyond what I could do on my laptop.

Actually, I have done inference on a 70bn-parameter model on my laptop and it is annoyingly slow. And so if you have a 175bn-parameter model, which is the size of GPT-3, that would not be something I could do on my laptop. Inference on the large models still needs data centre-level resources. 
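A back-of-the-envelope calculation shows why: just holding the weights in memory scales with parameter count. The figures below use the usual rules of thumb of two bytes per parameter at 16-bit precision and half a byte per parameter with 4-bit quantisation; they are illustrative, not measurements.

```python
# Rough memory footprint of model weights at inference time.
# Rules of thumb: ~2 bytes/parameter at 16-bit precision, ~0.5 bytes at 4-bit.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    # billions of parameters * bytes each ~= gigabytes
    return params_billion * bytes_per_param

for name, params in [("7bn", 7), ("70bn", 70), ("175bn (GPT-3 size)", 175)]:
    print(f"{name:>20}: ~{weight_memory_gb(params, 2.0):.0f}GB at 16-bit, "
          f"~{weight_memory_gb(params, 0.5):.0f}GB at 4-bit")
```

A quantised 7bn-parameter model fits comfortably in a laptop’s 16-32GB of memory; a 175bn-parameter model does not at any common precision, which is why inference on the largest models still needs data centre hardware.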

Open-source software’s getting easy enough for most people to just install it and use it now. And it’s not that I’m obsessed about regulation — but if some of the regulators have their way, it’d be much harder to let open-source models like this keep up.

RM: How would regulations disrupt open-source?

AN: Some proposals, for instance, have reporting or even licensing requirements for LLMs. And while the big tech companies have the bandwidth to deal with complex compliance, smaller businesses just don’t.

Just as an example, I’m picturing a midsize company that wants to release an open-source model. If a lawyer in that company starts saying, “Hey, just so you know, there could be all sorts of liability if you do this,” then I think fewer companies would take that liability risk. Whatever we put more regulatory burdens on, that’s what we’ll see less of.

An open-source model is a general purpose technology: it can get used to build a healthcare app, a customer service app, a financial services app, and on and on. So if you regulate that core technology, you’re slowing everything down, and probably without making anything meaningfully safer.

RM: Any discussion of regulation has to be framed by a sense of what AI is capable of — of where AI stands today. And in June you had a conversation with Geoffrey Hinton [a computer scientist who has warned of the dangers of AI] about whether AI models understand the world, and it seemed like you were not fully convinced that they do. What’s your current view?

AN: I think they do. One of the problems with terms like “understands” or, going even further, “conscious” or “sentient”, is that they are not well defined. So there’s no widely accepted test for when something understands something, as opposed to merely looks like it understands something. 

But from the scientific evidence I’ve seen, AI models do build models of the world. And so if an AI has a model of the world, then I’m inclined to believe it does understand the world. But that’s applying my own sense of what the word “understanding” means. 

RM: What do you mean by a model of the world?

AN: If you have a world model, then you have a sense of how the world works and can make predictions about how it may evolve under different scenarios. And there’s been scientific evidence showing that LLMs, when trained on a lot of data, do build a world model. 

What the researchers did was basically to take an LLM and train it to model moves in the board game Othello — C4, D5, B3, whatever. And then, after Othello-GPT, as they called it, had learned to predict the next move, they asked, “Has this system learned a model of the board, and has it learned a model of the rules of the game of Othello?” And when they probed it, the insides of the neural network appeared to have built a model of the board in order to predict the next moves. Because of that experiment, and others like it, I believe LLMs are building, internally, some model of the world, and so I feel comfortable saying they do understand the world.
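The probing step can be pictured with a short sketch: train a small “probe” classifier to read the state of each of the 64 Othello squares off the network’s hidden activations, and check whether it does much better than chance. The tensors below are random placeholders standing in for real activations and board labels, and the sizes are assumptions, so this illustrates the method rather than reproducing the experiment.

```python
# Sketch of a probing experiment in the spirit of Othello-GPT: can a simple
# classifier recover the board state (empty/black/white for each of 64
# squares) from the LLM's hidden activations? Random tensors stand in for
# real activations and labels; sizes are assumptions.
import torch
import torch.nn as nn

HIDDEN, SQUARES, STATES = 512, 64, 3

hidden_states = torch.randn(10_000, HIDDEN)            # one vector per game position
board_labels = torch.randint(0, STATES, (10_000, SQUARES))

probe = nn.Linear(HIDDEN, SQUARES * STATES)            # one linear probe per square
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = probe(hidden_states).view(-1, SQUARES, STATES)
    loss = loss_fn(logits.reshape(-1, STATES), board_labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# With real activations, accuracy well above chance would suggest the
# network's internals encode the board, ie a "world model".
pred = probe(hidden_states).view(-1, SQUARES, STATES).argmax(-1)
print("probe accuracy:", (pred == board_labels).float().mean().item())
```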

Learn the rules, grasp the world: researchers who trained a large language model to predict moves in Othello found that it had built a (digital) model of the board to do so © Alamy

RM: Do you think they have consciousness as well?

AN: I might skirt the consciousness question, because it feels to me like a philosophical question rather than a scientific one. I think philosophers say that, as people, out of politeness, we all assume other people are conscious — but how do you know if I am conscious? Maybe I’m just a zombie, and I just act like a conscious being. So I think there’s no test for consciousness, which is why it’s a philosophical rather than a scientific problem.

RM: Leaving aside consciousness, do you think an LLM can think for itself?

AN: I don’t know what that phrase means. I’m inclined to say yes, but, because of the lack of a clear definition of what it means to think, it’s hard to say. Can a relay switch in my ceiling lamp think for itself? There is a whole spectrum [that includes] this kind of relay switch thinking for itself . . . I’d be inclined to say it does, but I think I’d have a hard time defending that in a rigorous way.

RM: When did LLMs get to the point of understanding?

AN: Understanding comes in gradual degrees. I don’t think it’s a binary criterion. But as LLMs grew, and we had GPT-2, GPT-3, ChatGPT, I feel like they were demonstrating increasing levels of understanding — to the point where I feel quite comfortable saying that, to some extent, LLMs understand the world today.

RM: If it’s agreed that LLMs have the capacity to understand, the debate on AI seems to come down to optimists like yourself, who focus on what the technology is currently capable of, and doomers, who focus on projecting what the exponential advances we’re seeing will mean for the future. Do you think there’s any reason to extrapolate like they do?

AN: I don’t agree with that characterisation. A lot of the AI optimists are looking decades into the future at the amazing things we can build with AI. When I think about the AI human extinction scenarios, when I speak with people who say they’re concerned about this, their concerns seem very vague. And no one seems to be able to articulate exactly how AI could kill us all.

I can’t prove that AI won’t kill us all, which is akin to proving a negative, any more than I can prove that radio waves being emitted from Earth won’t allow aliens to find us and wipe us out. But I am not overly concerned about our radio waves leading to our extinction, and in a similar way I don’t see how AI could lead to human extinction.

RM: Yet there are well-regarded scientists who think there is some chance of that. I guess the question is: how should we as humans make sure that AI’s development doesn’t lead to our extinction?

AN: There is also some chance, absolutely non-zero, of our radio signals causing aliens to find us and wipe us all out. But the chance is so small that we should not waste disproportionate resources defending against that danger. And what I’m seeing is that we are spending vastly disproportionate resources against a risk that is almost zero.

RM: So in terms of regulation, what, if any, do we need?

AN: We need good regulation. When we use AI to build critical applications, regulation to ensure that they’re safe and that consumers are protected is absolutely needed. But what I’m seeing is a lot of bad AI regulation, and we don’t need more of that.

Things not to come: Ng argues that worries about AI posing an extinction risk are overblown — on a par with fears that radio waves from Earth could attract hostile extraterrestrials © Alamy

RM: What is good and bad regulation in a nutshell?

AN: If someone is building a healthcare or underwriting or self-driving car application, we want it to be safe and unbiased. Taking a tiered risk approach — thinking through what are the actual risks with applications and regulating against the bad outcome — would be good regulation.

But there is this phrase going around that LLMs represent a systemic risk, and that makes no sense to me. Some governments are just asserting that LLMs represent a bigger risk, but people can build dangerous medical devices with a small language model or with a large language model. And people can build systems for misinformation with a small or large language model. So the size of the language model is a very weak measure for risk. 

A better measure would be: what is the nature of the application? Because healthcare applications will be more risky, for example. Another metric would be the reach of the application. If a social media company has 100mn users, the risk of disinformation is much bigger than on a message board with just 100 users. And consequently we would regulate big tech companies more. 

This is a common practice. For example, the US’s Osha [Occupational Safety and Health Administration] laws put more requirements on big employers than on small employers. That balances protecting workers with not overly burdening small businesses.   

RM: In October, the White House issued an executive order intended to increase government oversight of AI. Has it gone too far?

AN: I think that we’ve taken a dangerous step. If we were to enshrine in the constitution that barriers to AI technology development will stop here and go no further, maybe it’s OK. But with various government agencies tasked with dreaming up additional hurdles for AI development, I think we’re on the path to stifling innovation and putting in place very anti-competitive regulations. 

RM: From what we have so far, it’s a broad outline — do we know how they’re going to implement it? For example, companies developing foundation models considered a risk to national security must notify the government and share the results of safety tests. Do we know what counts as a foundation model and how people are supposed to share information? 

AN: At this moment there are certainly lots of lobbyists who are very busy helping the government shape its perspective. The White House EO took its initial cut [on the threshold for mandatory notification] as the amount of computation needed, which I think is a very naive way to measure the risk of a model.

We know that today’s supercomputer is tomorrow’s smartwatch, so as start-ups scale and as more compute becomes pervasive, we’ll see more and more organisations run up against this threshold. Setting a compute threshold makes as much sense to me as saying that a device that uses more than 50W is systemically more dangerous than one that uses only 10W: while it may be true, it is a very naive way to measure risk.
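The threshold Ng refers to is framed in terms of total training compute: the executive order’s reporting trigger is 10^26 operations. A common way to estimate where a given training run falls is the rule-of-thumb approximation of roughly six floating-point operations per parameter per training token; the model sizes and token counts in the sketch below are hypothetical examples, not disclosed figures.

```python
# Illustration of a compute-based threshold, using the common approximation
# training FLOPs ~= 6 * parameters * training tokens. The 1e26 figure is the
# reporting threshold in the October 2023 executive order; the model sizes and
# token counts below are hypothetical.
THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for params, tokens in [(7e9, 2e12), (70e9, 2e12), (1.8e12, 13e12)]:
    flops = training_flops(params, tokens)
    flag = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.1e} FLOPs ({flag} threshold)")
```

Because hardware keeps getting cheaper and faster, a fixed threshold of this kind captures more and more ordinary training runs over time, which is the point of Ng’s supercomputer-to-smartwatch analogy.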

RM: What would be a better way to measure risk? If we’re not using compute as the threshold?

AN: When we look at applications, we can understand what it means for something to be safe or dangerous and can regulate it properly there. The problem with regulating the technology layer is that, because the technology is used for so many things, regulating it just slows down technological progress. 

At the heart of it is this question: do we think the world is better off with more or less intelligence? And it is true that intelligence now comprises both human intelligence and artificial intelligence. And it is absolutely true that intelligence can be used for nefarious purposes. But over many centuries, society has developed as humans have become better educated and smarter. I think that having more intelligence in the world, be it human or artificial, will help all of us better solve problems. So throwing up regulatory barriers against the rise of intelligence, just because it could be used for some nefarious purposes, I think would set back society.

RM: How do we safeguard against the possibility of using intelligence for nefarious purposes?

AN: I think we should absolutely identify the nefarious uses of intelligence and safeguard against them. If we look at AI extinctionism, its scenarios are so vague and fantastical that I don’t think they’re realistic. And they’re also hard to defend against. 

But there are realistic scenarios. We want underwriting software to be fair. Putting in place regulations to make sure that underwriting software is audited and measured for fairness — that would be a welcome change. And with social media or even chatbot companies that reach large numbers of users, where there is a risk of misinformation or bias, transparency makes sense to me. But while we should regulate businesses that impact a lot of users and therefore carry more risk, we shouldn’t also slow down the small start-ups and deny them a shot at becoming bigger businesses that would then rightfully be more heavily scrutinised.

RM: What about the potential proliferation of fake videos and other content created by AI? Is that something for the government to regulate?

AN: That’s a tricky one. I think watermarking could be a good idea. There was a White House voluntary commitment to AI [in July, when companies including Google, Meta and OpenAI pledged not to compromise safety, security and public trust in developing the technology]. And if you read those commitments carefully, I think all of them were fluff — meaning that companies could say they [were complying while] doing nothing differently from what they were already doing — except for one, which was the commitment to watermark generated content.

To guard against large-scale misinformation through videos or text — text is the important one to pay attention to — I think watermarking is something we should seriously consider. Unfortunately, since that White House voluntary commitment, I’ve seen companies step back from watermarking text content. So I feel that the voluntary commitment approach is failing as a regulatory approach.
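One concrete way to watermark generated text, sketched below, is the statistical “green list” scheme published by academic researchers in 2023: a hash of the preceding token pseudorandomly splits the vocabulary, generation nudges probability towards the “green” half, and a detector that knows the seeding rule tests whether a passage contains improbably many green tokens. This is a simplified illustration with random logits standing in for a real model’s output, not any company’s production scheme.

```python
# Sketch of a statistical text watermark in the spirit of the "green list"
# scheme (Kirchenbauer et al, 2023). A real implementation sits inside an
# LLM's sampling loop; here random logits stand in for the model.
import numpy as np

VOCAB, GAMMA, DELTA = 50_000, 0.5, 2.0   # vocab size, green fraction, logit boost

def green_mask(prev_token: int) -> np.ndarray:
    rng = np.random.default_rng(prev_token)      # seed the split from the previous token
    return rng.random(VOCAB) < GAMMA             # pseudorandom "green list"

def watermarked_sample(logits: np.ndarray, prev_token: int) -> int:
    biased = logits + DELTA * green_mask(prev_token)   # nudge green tokens up
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(VOCAB, p=probs))

def detect(tokens: list[int]) -> float:
    # z-score of how many tokens fall on their green list vs chance (GAMMA)
    hits = sum(green_mask(prev)[tok] for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return float((hits - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA)))
```

Text generated with the biased sampler scores a high z value under `detect`, while ordinary human text scores near zero; detection needs only the seeding rule, not access to the model’s weights.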

RM: It seems like the voices urging tighter regulation are a lot louder and maybe more numerous than those arguing the opposite. Do you feel like you’re alone in pushing for the hands-off approach? 

AN: Actually I would love for governments to be hands on and to write good regulation, as opposed to the bad regulatory proposals we’re seeing, so I’m not advocating hands off. But between bad regulation and no regulation, I’d rather see no regulation.

Unfortunately, there are massive forces, including some very large companies, that I think are overhyping the risks of AI. Big companies would frankly rather not have to compete with open-source AI. And unfortunately the recipe is to hype up fears, and then try to put in place regulations to slow down innovation and slow down open-source.

Look at what we just did on my laptop: open-source software is surprisingly competitive with some of the proprietary [models] — not with the best, but with some of the earlier versions. And it’d be very convenient for some companies not to have to compete with this.

RM: Can you name some companies that are pushing this AI threat narrative?

AN: You can imagine [who they are]. Multiple companies are overhyping the threat narrative. For large businesses that would rather not compete with open-source, there is an incentive. For some non-profits, there is an incentive to hype up fears, hype up phantoms, and then raise funding to fight the phantoms they themselves conjured. And there are also some individuals who are definitely commanding more attention and larger speaker fees because of fear that they are helping to hype up. I think there are a few people who are sincere — mistaken but sincere — but on top of that there are significant financial incentives for one or multiple parties to hype up fear.

I don’t think I’m alone in feeling this way. Bill Gurley [general partner at venture capital firm Benchmark] has been very thoughtful about regulatory capture. He gave a talk that’s on YouTube, a fantastic talk, in which he accurately predicted a lot of the regulatory capture moves that are now playing out. [Computer scientist] Yann LeCun has been speaking about this as well. I think there are actually quite a few people with a very thoughtful perspective on this.

RM: It just seems like the other side is louder. 

AN: And frankly, you’re the media. So you can help. “If it bleeds, it leads”, and the same goes for fear. When lots of people signed [the Center for AI Safety statement] saying AI is dangerous like nuclear weapons, the media covered that. When there have been much more sensible statements — for example, Mozilla saying that open source is a great way to ensure AI safety — almost none of the media covered that. 

West is best: tech investor Bill Gurley has said that Silicon Valley’s success in innovation owes a lot to its being so far from regulators in Washington, DC © David Paul Morris/Bloomberg

I think that Cais move was one of the most unfortunate things, because a lot of regulators are confused about AI. The statement that when you think about AI, you should think about nuclear weapons — that message, that misleading message, came through loud and clear, and distorted the thinking among the regulators. I saw the impact it had in DC.

I see no reason to make an analogy between AI and nuclear weapons. It is an insane analogy. One brings more intelligence and helps make better decisions, and the other blows up cities. What have these two things to do with each other? 

The risk of regulatory capture is starting to dawn on more nations, because a lot of generative AI talent is concentrated in the US today and one of the best ways to make sure that cutting-edge technology is widely disseminated is open source. If regulations come up that hamper dissemination of open source, guess who will be left behind? Pretty much everyone other than the US.

There are forces in the US that would like to see that happen because of the perceived risk that the US’s adversaries are benefiting from open source. But while no one wants to see AI used to wage an unjust war, I think the price of slowing down global innovation, of letting there be less intelligence and poorer decision-making all around the world, is too high a price to pay. I hope the European regulators figure this out, too, because, frankly, who will be left behind if we slow down open source?

RM: After reading Kai-Fu Lee’s book AI Superpowers, which came out in 2018, I was convinced that China would be the one leading the way on AI development. But that doesn’t really seem to have happened. Why do you think that is? 

AN: Kai-Fu made an argument about China’s access to data. But data is very verticalised — data is not a single, featureless glob of things that you just want more of. For example, while Google has tons of web search data, that data by itself is not very useful for logistics, or smartphone manufacturing, or drug discovery.

And different countries will have data in different verticals that they can leverage to their own advantage. China is clearly ahead of the US in its application of surveillance technology, and also in digital payments, which have taken off there. But I think that the US, in the industries where it is strong, has lots of its own data, and has assets to maintain its strengths — in sectors like web search, drug discovery and pharmaceuticals. 

Technology comes in bursts. China just missed the beginnings of the generative AI wave. A lot of the early generative AI work — and even now, frankly — was done by two teams: Google Brain, my former team, and OpenAI, which both happened to be here in Silicon Valley. And sometimes people leave these teams and start other companies, which is why at this moment I see a very heavy concentration of deep tech talent in generative AI in Silicon Valley and nowhere else. There are pockets of talent, in the UK, in Canada, in China, but it’s actually very concentrated just in Silicon Valley. Even Seattle, say, and New York have much less generative AI talent than Silicon Valley.

RM: Beijing has been taking a pretty aggressive approach with regulating AI. Any LLM that a company wants to release to the public has to be approved, and the authorities want to look at what data sets it has been trained on. Do you think that’s going to stifle innovation? 

AN: I’m not an expert on this. I think the implementation of these regulations will have a big impact on how much they are or are not stifling. 

RM: Recently, Tencent and Alibaba have been talking about their lack of access to Nvidia chips as a possible constraint on their development of AI. What do you think about the US’s approach?

AN: US export controls on semiconductor chips are definitely having a meaningful impact on China’s AI development. I’m seeing a lot of innovation in China on getting things done without access to the advanced Nvidia and AMD chips — innovation on how to do inference on LLMs using lower power chips. And at least right now, the export controls have enough loopholes that I think companies in China are sometimes trying to find their compute overseas. So how it all shakes out remains to be seen. 

RM: Do you think the US should be doing this — essentially trying to hobble their industry?

AN: I have very mixed feelings about it, but I’m not an expert in geopolitics. But one thing that is clear is that the US is trying to regulate foundation models — there is definitely a faction in Washington, DC, that is influencing things like the White House executive order because of fears about the US’s adversaries getting access to open-source software.

For the software layer, I think the US is hobbling itself as much as anyone else. And that’s a mistake. The hardware I’m less of an expert on.

RM: It sounds like you think that Chinese companies can find ways around these chip constraints.

AN: I’ve seen a lot of innovation to get things done despite the chip constraints, right. 

Copyright The Financial Times Limited 2024. All rights reserved.