About this transcript: This is a full AI-generated transcript of How the Musk v. Altman lawsuit could shape the future of AI — Global News Podcast, published May 1, 2026. The transcript contains 1,885 words with timestamps and was generated using Whisper AI.
[0:00] Welcome to the Global News Podcast from the BBC World Service. I'm Will Chalk and today we are
[0:05] talking to Mark Chislak, our AI correspondent, because of the court battle that's been called
[0:09] a battle between Godzilla and King Kong. Now I'm not sure if the person who coined that phrase
[0:14] worked out who was who in that scenario, but it is Elon Musk and Sam Altman. And this is all to do
[0:20] with the founding of OpenAI 10 years ago. So take us back. Yeah, it's a tale as old as time,
[0:26] or at least a tale as old as a decade. A tale between two tech bros, Musk and Altman. Now Musk
[0:33] is suing OpenAI and its co-founders, Greg Brockman and Sam Altman, as well as Microsoft. And he's
[0:40] claiming that he was misled, effectively swindled out of something like
[0:47] $38 million in donations. He argues that OpenAI has abandoned its founding
[0:55] principles, that it set out as a not-for-profit with a non-profit mission. And it was to ensure
[1:03] that if the company ever managed to develop artificial general intelligence, AGI as it's
[1:08] known, that this would be done for the benefit of humanity. Now he's asking for billions of dollars
[1:16] in wrongful gains. And he even wants a major restructuring of OpenAI. I think he's also asked
[1:22] for Sam Altman to potentially be removed, which would be a very, very big move. Now OpenAI rejects
[1:29] all of these claims. It's saying that the shift towards a for-profit structure was agreed in 2017,
[1:36] and that Musk himself pushed for control of the company at the time, something that OpenAI's board
[1:43] actually refused. Now the case is being heard in federal court in California before a nine-person
[1:48] jury. And it could run for up to something like a month. Now a lot of people who use AI will use
[1:53] it for everyday things, you know, almost like a glorified search engine at the moment. But actually
[1:57] at the core of this argument is the ultimate purpose of AI. And also this push for general
[2:04] intelligence, this AI that will surpass human intelligence. So this isn't just a dramatic
[2:09] battle, is it? There's quite a lot at stake here. Yeah, yeah, yeah. I mean, it isn't just about
[2:15] money. It's about that race towards AGI, artificial general intelligence. And this is where technology
[2:22] broadly becomes smarter than people and has capabilities beyond human capabilities. A lot
[2:30] of the tech bros are selling it as it's just around the corner. It's just around the corner. Just
[2:33] give me five billion more dollars to invest in this and we'll achieve it. And then you have some really,
[2:39] really noted experts in the field who say it's a long way away from being developed. And that some of
[2:45] the technologies at the moment, things like large language models, things like chatbots and stuff
[2:49] like that, that they are perhaps a technological cul-de-sac. And they're not the right way to go
[2:55] if we want to arrive at this technology that's smarter than we are.
[2:58] But practically it matters as well, because that theoretically is the moment when we start living
[3:03] in a sci-fi movie, when we get AGI, right? And AI starts to be able to do all these things.
[3:07] Yes, but then you've got to think about society. As far as society is concerned,
[3:12] you've got to ask: is it desirable to have a society where so many aspects
[3:19] of it are automated? What do you get from having a completely automated society? What does that mean
[3:25] for employment? What are people going to do? And you have to start asking yourself and governments
[3:31] have to start asking themselves, and we as a society have to start asking ourselves,
[3:34] is that something that we want? These are really big philosophical questions.
[3:40] And to bring this back to the trial then, OpenAI's argument is that Elon Musk is essentially using
[3:45] this trial to try and knock them out of the water so he has more power in the AI world.
[3:51] And it's this idea that you've got these billionaires competing for what will eventually
[3:55] be this incredibly powerful technology that they're going to have control over.
[3:59] I don't know if you saw the banner that was held up outside the courtroom by a protester.
[4:04] It said, everyone sucks here. Musk equals Altman.
[4:09] I can see why people who aren't keen on the tech industry might think that. And they think of
[4:16] tech billionaires as all being one homogeneous group of people who are in control of very,
[4:23] very powerful technologies and are guiding the way that society is evolving in certain respects.
[4:29] And there's a school of thought that says, yeah, that's true. They wield an enormous amount of
[4:35] power over society. And they have connections to and relationships with governments across the world.
[4:42] And as a consequence of that, what they want to happen, as far as their business is concerned,
[4:46] tends to happen. Or at least, when we think of things like regulation,
[4:52] a lot of governments around the world are taking a very soft touch
[4:57] as far as regulation of technologies like AI is concerned. And that's leading to the speeding up of
[5:05] the development of the tech, if you like. But it also comes with significant risk because in the race
[5:11] to make this technology, in the race to try and realise AGI or try and at least get the technology,
[5:17] make it even more powerful than it currently is, there's a lot of risk of the things that can go
[5:22] wrong and the things that get broken along the way. And the effect of that could be quite dramatic
[5:28] and be quite catastrophic as well. You mentioned regulation there. Is there a sense that maybe in
[5:34] the absence of any extremely strong regulation from governments that the tech companies, the AI
[5:39] companies are actually using the courts to slog it out amongst each other and almost regulate
[5:44] themselves and work out who comes out top dog? I think there's a slightly different question
[5:48] around regulation and what's going on with this court case. When it comes to the notion of regulation,
[5:54] the tech companies, and they have said this to me, you know, very, very senior executives from
[5:59] a lot of these companies have said to me that they frame the race towards AGI or they frame the evolution
[6:04] of this technology as thinking about it in terms of global dominance. Who do you want to be globally
[6:10] dominant as far as this technology is concerned? Do you want it to be broadly the West or do you want it
[6:16] to exist in China? Do you want the Chinese state to have absolute control over this technology,
[6:23] which has enormous potential to affect all of our lives? Or do you want that to exist in the West
[6:28] and predominantly in the United States? And as a consequence of that, they have argued that stronger
[6:34] regulations restrict the development of the technology in the West. It's regulations that
[6:39] won't be restricting companies in China and that the Chinese state isn't concerned with those kind
[6:44] of regulations. So as a consequence of that, the Chinese state might be able to gain a lead
[6:49] on the West and end up being the dominant power, as it were, as far
[6:54] as AI is concerned, which is something that Western tech companies have said they really, really
[6:58] don't want. And they put the question out to everybody in the West: is that something
[7:04] that we want? And because so many governments around the world are taking a very, very soft touch,
[7:10] as far as AI companies are concerned, at the moment, it would appear that a lot of regulators
[7:16] are in agreement with those tech companies. What's it mean? You said Musk is asking for,
[7:21] well, quite big damages that he wants to reinvest back into the non-profit arm,
[7:25] but also the removal of Sam Altman potentially. What will it mean if Elon Musk is victorious in this?
[7:32] It would be a really big deal if he is victorious. I mean, Sam Altman isn't just a figurehead. Sam Altman
[7:39] is intrinsic to OpenAI. When you think of OpenAI, you think of Sam Altman as its CEO. So his
[7:46] removal would be absolutely seismic for the company. But at the moment, it's anybody's guess
[7:53] which way this is going to go. This nine-person jury has got to decide who is more credible.
[7:58] Is Elon Musk more credible or is Sam Altman more credible? You'd have to be in the courtroom,
[8:04] weighing those things up yourself to judge, and it's up to
[8:09] those nine people to make that decision. But what would it mean for OpenAI? It would mean, well,
[8:15] it has the potential to mean that OpenAI becomes a very, very different company. It looks like a
[8:19] very, very different company. It has the potential to mean it has to make a big, big payout
[8:22] to Elon Musk. But whether that's likely to happen, who knows?
[8:26] And just to put that in context, this is a company that is hoping to be valued
[8:31] at one trillion dollars later this year when it floats publicly.
[8:35] So it's got big implications for business as well.
[8:38] Yes. Yeah, yeah, yeah, yeah. There is the race to float it. And industry experts have been observing
[8:47] it and looking at it very closely. And everybody knows that this is what they want to do, that this
[8:50] is what's going on broadly in the space. Anthropic is trying to do the same thing.
[8:55] OpenAI are trying to do the same thing. So yes, it does have an effect on things like valuations
[9:00] and on the business of AI. But the business of AI at the moment is one which is slightly in flux.
[9:07] Talk of a bubble is perhaps inflated. But then when we look at valuations of some of these companies,
[9:15] they're also quite inflated as well. These are companies where it's quite difficult for them
[9:21] to show anything approaching a return on investment for any length of time.
[9:25] Because they cost so much money to run.
[9:27] Yeah, they cost an enormous amount of money to run. It costs an enormous amount of
[9:30] money to develop these technologies. So as a consequence, seeing any kind of profit from them
[9:35] isn't likely to occur for a very, very long time. So what's investors' appetite for waiting to see
[9:42] that? Yeah, there's the potential that these could be the most powerful companies in the world. These
[9:45] could be the next Googles, the next Apples, all of these kind of things. But that's potential and
[9:50] that's risk. But hey, that's the markets. It's all risk.
[9:53] Mark, thank you very much. That was Mark Chislak,
[9:56] our AI correspondent. And if you want to hear more from us,
[9:58] you can listen to the Global News Podcast. Just click the link below.