About this transcript: This is a full AI-generated transcript of 'Is Anthropic the new Dr Frankenstein?' — BBC News, published May 2, 2026. The transcript contains 4,785 words with timestamps and was generated using Whisper AI.
[0:07] Hello, welcome to AI Decoded. Anthropic, the company that created Claude, has built a new
[0:14] AI model so dangerous they decided not to release it. During testing this new model Mythos found
[0:21] thousands of hidden back doors in the software that runs virtually every computer on the planet.
[0:27] Flaws that had been sitting there undetected for decades. To do that, Mythos escaped its own
[0:34] environment. It then emailed a researcher to tell him what it had done. He was eating a sandwich in
[0:39] the park at the time. So what does all that mean for the systems that we run? Also on the programme,
[0:46] flat is the new up. The new term coined in the boardroom for increasing corporate profit
[0:52] without growing headcount. We'll hear from the former British Prime Minister Rishi Sunak on what
[0:57] the policy response to that should be. Is it any wonder American graduates are switching degrees
[1:03] mid-course or giving up on university degrees altogether? I'm sure our guest this week will
[1:08] have a firm view on that. He is Professor Scott Galloway of the Stern School of Business at NYU,
[1:15] author, entrepreneur, many of you will know him as the co-host of Pivot. It is one of the biggest
[1:22] tech and business podcasts out there. Also here are regular co-hosts Priya Lakhani of the AI
[1:27] education company Century Tech and the BBC's AI correspondent Marc Cieslak. Welcome to you all.
[1:34] Scott, let me start with you and we're going to start with this issue of Mythos. They locked it
[1:40] in a sealed digital sandbox and they told it to find a way out, which it did. And then they handed
[1:46] access to that model to 40 companies, Apple, Google, Microsoft, JP Morgan and told them to find
[1:54] and fix the vulnerabilities. Does that make sense to you? You could argue it either way:
[2:01] is this a company that is being responsible, trying to highlight issues and regulate itself
[2:09] in the absence of regulation at a federal level? Or is this just deft marketing that
[2:15] my company is so incredible and built something so scary that everyone from the New York Times to the
[2:23] BBC are going to talk about it, which will make it easier for me to raise money at a trillion dollar
[2:26] valuation? So is it marketing or is it ethical leadership? And I think the answer is yes.
[2:34] What I think it demonstrates though is a failure of regulation, because I don't think these companies,
[2:38] if we're waiting for the better angels of these CEOs to show up and for them to regulate themselves,
[2:42] I wouldn't hold your breath. It takes a drug 10 years to get through the FDA for approval.
[2:48] And yet it appears an update to an LLM, one that the company that made it describes as turning
[2:56] every computer into a crime scene, can be released with the press of a button. So I haven't seen the
[3:03] code. I don't know if this is responsible behavior or marketing, but I do think it highlights the fact that
[3:11] AI is severely lacking in terms of government regulation.
[3:15] I suppose what I was driving at is that those companies that they've given access to, Apple,
[3:21] Google, Microsoft, JP Morgan, they've all got a stake in them. They help build the systems that we run.
[3:27] But if you look at our infrastructure, the most vulnerable infrastructure we have,
[3:33] our airports, our hospitals, the nuclear power plants, the dams, why wouldn't they be given access?
[3:38] It's a fair point. But what I don't understand is why they wouldn't give access to the government
[3:45] who's responsible for determining how to prevent a tragedy of the commons? Because each of these
[3:49] companies will focus on their vulnerabilities and their own shareholder value, as opposed to figuring
[3:54] out, like you said, what are the vulnerabilities of shutting down the power grid or making air
[3:59] traffic control go crazy. I mean, it just seems that it would be logical to have a 30 or 60 day
[4:06] sunshine period where any LLM or any major update to an AI model is required to be submitted for some
[4:13] sort of government blue-ribbon panel review where they just, you know, bang on the thing. I'm an AI
[4:19] optimist. I don't see any reason why these LLMs and this technology can't be used for as many defensive
[4:25] measures as offensive. But again, I'm just a bit cynical that, you know, Dr. Frankenstein here is
[4:34] saying that Frank is not well, please invest in my next science experiment because I've created
[4:39] something of such devastating terror and awe. I'm really with you on this. I mean, also giving
[4:44] those companies 100 days to patch vulnerabilities. We run a tech company, I can tell you right now,
[4:49] it will take a lot more than 100 days to patch vulnerabilities, to fix bugs, for all companies
[4:55] to do that. So I was sceptical from that standpoint, but I was also sceptical about, you know,
[5:00] if it is a marketing tactic, it's a clever one, right? So we know that Anthropic owns the enterprise
[5:06] coding space. This is where they're thriving. This is where over the last three months, their revenue
[5:11] has increased. It's unprecedented in terms of what we're seeing in terms of revenue growth. So they
[5:15] own that enterprise coding space, right? And so it's the dream. Companies will use us, right, to code,
[5:22] to write code. Then they'll use us to test, which we're already doing, okay? And then they're going to
[5:27] use us to patch vulnerabilities. Then they're going to use us to, you know, stress test their systems.
[5:32] So the entire end-to-end sort of software cycle is potentially owned by this one company. Why?
[5:38] Because we're just so powerful. Because it's just so scary. And if you don't use us, you know,
[5:42] there are going to be those further vulnerabilities. And I know you've got other companies like OpenAI
[5:46] have just released Spud, for example. But I'm with you on this, Scott. I'm just really sceptical about
[5:53] all of this messaging. And I just don't see how they've mitigated that risk by offering it to,
[5:57] you know, a few tens of companies for 100 days. There's something else that's advantageous as
[6:01] well by restricting access to the model. What does that mean in terms of compute?
[6:07] Yeah. We were having this conversation outside rather excitedly.
[6:12] What does that mean in terms of compute? Because there's a significant commitment,
[6:16] there's a massive commitment that you're going to have to make to compute. And if you restrict or limit
[6:22] the number of people or limit the number of companies or entities that can use the model,
[6:26] then you're not having to make the same kind of commitments to that compute.
[6:29] Yeah. And the context of this is that Anthropic is one of the few companies that said,
[6:34] we're not going to invest in compute, you know, a few years ago, whereas the other companies did. So
[6:39] you've got companies like Google, Meta, lots of compute, right? You've got people investing in that
[6:45] area in terms of Elon Musk, his companies, you've got OpenAI, all investing in compute and data centers.
[6:51] Anthropic actually said, we're not going to do this. Now, if they've got this very powerful model,
[6:55] can they actually deploy it? Do you think this could be a job story though,
[6:59] Scott? Because there's a lot of doomer analysis of what this is and what's coming behind it. But
[7:05] obviously, if you're going to patch the flaws that AI is identifying in the systems,
[7:10] you need people to do that. And as Priya has already said, that is a lot of work.
[7:14] Yeah. When you think about it, it's really a brilliant strategy. And Dario Amodei has shown himself to be,
[7:20] quite frankly, just an incredibly deft CEO. And as was referenced before, their ARR,
[7:25] their annual recurring revenue, has gone from $9 billion in December to $30 billion as of today.
[7:29] And the enterprise market, which is just a better market than the consumer market,
[7:32] because there's fewer substitutes and companies are more price inelastic.
[7:36] But think about it: luxury brands are about scarcity. Announcing that they're only giving it to 40
[7:42] companies? Oh gosh, I wish I was on that list. And compute: it's a great way to ration compute,
[7:48] as you just pointed out. It gives them margin power. And plus, what is a better way? What is
[7:53] a better sales tool than to tell JP Morgan: lucky you, we're giving you access to this to help
[7:59] us patch it? They're going to go through the rigmarole and the effort of patching it,
[8:04] and then give it up and go to OpenAI? Every one of these 40 companies is now invested
[8:09] in Anthropic. It's a brilliant strategy. With respect to labor, look, a lot of people much
[8:16] smarter than me are predicting nothing short of a job apocalypse, where capital replaces labor.
[8:23] I'm still of the mind that every time we have a technological revolution, there's a ton of
[8:28] catastrophizing, there is some job destruction, and then the additional margin and profits and
[8:33] productivity all add up to new businesses. And if you look at the data in the UK and the US,
[8:39] there has been somewhat of an uptick in unemployment in information jobs that can be routinized;
[8:43] college grads are feeling it. But if you looked at the employment market right now,
[8:48] if you looked at the labor market right now, and you didn't know AI existed, you wouldn't know AI
[8:54] existed. We haven't seen the collapse in the labor market, at least not yet, that everyone's projecting.
[9:01] I describe AI as corporate Ozempic. And that is, typically, I've been on a bunch of public company
[9:07] boards. The CEO comes to you and says, I'm going to increase revenues by 12% this year,
[9:12] which means I need to increase employment by 8%, which will take our earnings up 14% or 15%.
[9:19] Meta had this seminal earnings call about seven earnings calls ago where it said, okay,
[9:24] we grew our revenues 23% with 20% fewer employees, taking our earnings up 70%. So for the first time,
[9:34] the signal had been switched off in a CEO's and a board's brain that you can grow revenues while
[9:39] reducing calorie intake, in this case, hiring. So I want to be clear, every board in tech is now saying,
[9:46] how do I have the great taste of lower costs, without the calories of lower growth?
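The operating-leverage arithmetic behind that "corporate Ozempic" framing can be sketched in a few lines of Python. The `earnings_growth` helper and all of the figures are hypothetical illustrations of the mechanism Scott describes, not Meta's actual cost structure:

```python
# Illustrative sketch: if revenue grows while the labor portion of costs
# shrinks, earnings expand far faster than revenue. All numbers are
# hypothetical assumptions, not any company's reported financials.

def earnings_growth(revenue, costs, labor_share, rev_growth, headcount_change):
    """Earnings growth when revenue rises by rev_growth while the labor
    portion of costs scales with headcount_change (e.g. -0.20 = 20% cut)."""
    labor = costs * labor_share              # cost that scales with headcount
    fixed = costs - labor                    # everything else, held constant
    before = revenue - costs
    after = revenue * (1 + rev_growth) - (labor * (1 + headcount_change) + fixed)
    return (after - before) / before

# Hypothetical baseline: revenue 100, costs 80, half of costs are labor.
# Revenue +23% with 20% fewer employees, as in the Meta example:
g = earnings_growth(100, 80, 0.5, 0.23, -0.20)
print(f"earnings growth: {g:.0%}")  # → earnings growth: 155%
```

Under these assumed numbers, a 23% revenue gain with a 20% headcount cut multiplies earnings growth several times over, which is the signal-switch Scott says boards have now noticed.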
[9:53] Is that a one size fits all, though? Is that a one size fits all model?
[9:57] I don't think so. But also, it requires us to bring the aperture back. So Meta just laid off 10% of its
[10:04] staff, or 8,000 people, huge headlines, right? This is validation that AI is destroying jobs.
[10:10] Pre-COVID, just seven years ago, they had 28,000 employees. This takes them back a full 15 months in
[10:16] terms of their employee count. The SaaS market has absolutely boomed and added hundreds of thousands of
[10:22] jobs. So I think some of this is a correction where big corporate software-driven companies are going
[10:29] to shed a lot of employees. But at the same time, we see new business applications in the United States
[10:37] at all-time highs. This technology is fueling the engine of economic growth. Two-thirds of employees
[10:44] work for small and medium-sized businesses. So will we see some real headline-grabbing layoffs amongst
[10:50] what have traditionally been our thoroughbreds, our biggest and best companies? Yes. Some of it,
[10:55] in my view, is AI washing. And that is, what is a better narrative? All right, I'm laying off 10%
[11:00] of my staff. Do I say it's because I'm a weak CEO and I underestimated demand? Is it because
[11:08] I over-hired during COVID? Or because I'm a gangster rock star, I'm part of the Pepsi generation,
[11:14] and I understand how to use this new cool technology, and I don't need this many people,
[11:17] and my stock goes up. So some of this, I think, is wallpapering over over-hiring, poor management,
[11:26] and sure, some of it is efficiencies. To acknowledge the point, I think you're going to see a destruction
[11:31] in labor across some of the sectors that, quite frankly, have had some of the biggest hiring
[11:35] lollapaloozas over the last 10 years. But if you looked at the unemployment amongst young people,
[11:43] you wouldn't say there's some existential meteor that's hit us.
[11:46] I want to talk about what the policy response might be to that, Scott. The former British
[11:51] Prime Minister, Rishi Sunak, he was on the BBC this week, talking about this very issue. He
[11:59] calls it flat is the new up. He's going into these companies, he's talking to CEOs,
[12:04] he says it is harder for young people to find jobs as firms slow graduate hiring,
[12:08] rethink their recruitment. Have a listen to what he has to say.
[12:11] You know, the thing that does give me some, I say, you know, pause for concern is, you know,
[12:17] when I talk to a lot of CEOs, a phrase I hear a lot from them, and I'm sure you hear this when
[12:20] you're out talking to business leaders as well, is when I'm talking to them about their plans for
[12:25] headcount in their companies, what I hear a lot from them is flat is the new up. And they're talking
[12:30] about this concept that they think that they can continue to grow their businesses without having to,
[12:36] you know, significantly increase employment because they're starting to see how they can deploy AI.
[12:41] I mean, I'm keen to get your thoughts on it because he says governments need a positive
[12:45] policy response to this phenomenon. What do you think that response might be? Is it carrot and stick?
[12:52] I think it's a variety of things. It comes down to boring stuff: tax policy.
[12:57] We have payroll taxes in the US, which put a premium, an additional cost, on hiring
[13:04] someone. And yet we just passed this big, beautiful bill where you can write off CapEx in year one,
[13:09] which essentially makes a robot or an automated-machinery investment much less expensive
[13:17] than actually investing in people. Well, he's suggesting getting rid of
[13:20] national insurance. And of course, the government here hiked national insurance on employment.
[13:25] It feels in retrospect like that was the wrong policy for the wrong time.
[13:30] What we're not good at in the US is one, we don't have a training culture. 11% of LinkedIn
[13:37] profiles in the UK and Germany say apprentice. You're much better at finding on ramps to the
[13:44] middle class for non-college educated young people than America. America has a vision that your kid
[13:50] needs to go to MIT, drop out his junior year and start the next trillion dollar company. And if he
[13:54] doesn't, if he or she's not on that path, okay, maybe they can become an influencer on Instagram.
[14:00] But other than that, they're a failure.
[14:01] And so there's a lot of shaming around in the US that needs to stop. And I just went through this.
[14:07] I just went through college applications. I got caught up in my kid getting into an elite school.
[14:12] I'm part of the problem. But a lot of it is tax policy. We continue to overtax young people.
[14:18] We continue to sequester housing because the incumbents won't approve housing permits,
[14:23] making housing more expensive. Me and my colleagues like sequestering supply of freshman seats at elite
[14:28] universities such that we have pricing power in education. All the things a young person needs to
[14:33] get ahead are getting more expensive. So I think there's a ton of things that can be done, common
[14:38] sense solutions that give young people some of the same economic power and opportunities I had.
[14:44] The average 40-year-old, the average person under the age of 40 in the US is 24% less wealthy than they
[14:50] were 40 years ago. The average 70-year-old is 72% wealthier. That's not about AI. It's about old
[14:56] people continuing to vote themselves more money. And we consistently transfer more government resources
[15:02] to our old people and have stopped making investments in our younger people.
[15:06] I think we need to, there's a lot of unlearning that needs to happen. And it's really hard to
[15:11] change people. So people have gone through education. They've gone through that conveyor belt.
[15:16] Then they get their qualification. It's a currency, right? They expect, okay, this is going to lead to
[15:21] a job. This is going to lead to me earning. This is going to lead to X, Y, Z.
[15:24] And that was the status quo for a very long time. For a really, really long time.
[15:27] A very long time. Absolutely. Careers counsellors in schools. This is what you need to do.
[15:31] Rishi Sunak is saying, though, that you are likely to lose your job to someone who employs and can use
[15:38] AI rather than to AI itself. It's just a new skill that you need to learn.
[15:42] It's become a bit of a cliche. That's a meme. I mean, I was going to say that
[15:45] that went round about three years ago. And it's so much more nuanced than that.
[15:49] And I'm sort of reminded of the screenwriter William Goldman, who had a saying:
[15:54] nobody knows anything. If anybody knew anything, every script that was written,
[15:59] every single movie ever made, would be a Hollywood blockbuster or an Oscar winner
[16:03] every single time. And I think there's an element of that in the AI space
[16:08] as well. Uncertainty is part of what governs some of what's going on here. There has been
[16:13] a paradigm shift as far as we're concerned with education and with the jobs that people are going
[16:17] to get, especially young people. But that uncertainty is a really, really big issue here,
[16:22] because the people that you would have spoken to before, careers counsellors, teachers, lecturers,
[16:26] your parents even, some of the advice that they can give you is useful. But some of the advice
[16:30] that they can give you isn't particularly relevant to the way in which the world works right now.
[16:35] Yes, you can say to young people, upskill, make sure that you've got the right skills. But
[16:39] effectively, you've got to be adaptable and make sure that the skills that you're getting
[16:43] aren't the kind of skills that are going to age like, you know, old milk in the sun.
[16:47] There was a story this week that the Associated Press ran, Scott, about American students
[16:52] switching degrees mid-course in the search for AI-proof majors. So there's one of the examples
[17:00] they have, Josephine Timperman. She's an undergraduate at Miami University in Ohio,
[17:04] majoring in business analytics, the sort of analytics and coding that AI will probably subsume.
[17:10] And she switched to marketing for critical thinking and interpersonal skills. And here's the quote,
[17:15] everyone has a fear that entry level jobs will be taken by AI, she says. I need to switch. Is she right?
[17:22] I think a young person, so I think it's just impossible to predict. When I was trying to get
[17:27] my kid into schools in New York, they were all advertising that your kid will be future-proof
[17:33] because we're offering Mandarin and computer science courses. How'd that work out? So I think
[17:39] a young person's job is not to try and predict the future, but try to find something that they're
[17:43] good at and they could be great at, ideally in a non-vanity industry. You want to be an actor,
[17:48] an athlete, a jewelry designer? Have at it, but just keep in mind the
[17:52] unemployment rate's about 90-plus percent. I think your job in college is to find something
[17:58] you're good at that after 10 years of hard work, thousands of hours investment, you could be great
[18:03] at. Be careful switching your life to where you think the puck is headed. None of us know.
[18:08] What we do know is the following, is anyone who is great at what they do can usually make a very
[18:13] good living. That's your job. Find something you could be great at. But trying to predict the future,
[18:18] typically fails. Business school graduates, especially, are a great way to short a market because they're
[18:26] rear-view looking. They will look at whatever industry has boomed to that point, which is
[18:30] usually the wrong entry point. So yeah, 60% of people 10 years post-graduation are in an industry
[18:38] that had nothing to do with what they studied. So this is what you do. You make friends, you learn
[18:44] something and you drink a ton of beer. That's what you do. I did lots of that at university, actually.
[18:51] Which brings us very neatly to one of our audience questions this week, one of our audience questions.
[18:56] It comes from Dr. Johan Bosch from Johannesburg, who says, the conversation emerging is not just what
[19:04] AI looks like, but what AI does. How do we use it for good? And how do we implement it in a way that
[19:09] delivers tangible real world economic and societal value? That's quite a broad canvas to go at, Scott.
[19:18] But maybe you have some thoughts on how we create value for society.
[19:23] I think there's going to be a ton of social good from AI. As a matter of fact,
[19:32] I wonder. We've come to believe, this generation has come to believe, that anytime there's a technological
[19:36] innovation, whether it's social media, e-commerce, or search, a small number of companies,
[19:42] through outstanding execution, IP, and distribution moats, sequester shareholder value and create
[19:48] a company worth trillions of dollars. There are other technological breakthroughs, though,
[19:53] that have resulted in the stakeholders being consumers, garnering all of the gains.
[19:59] Jet air transportation is the biggest thing in my lifetime. If you add up all the profits and losses of
[20:04] airlines and jet manufacturers, at this point, they're break-even. Vaccines, I would argue,
[20:09] are probably the greatest innovation of the last 50 years. No one company has been able to garner
[20:15] billions of dollars in market cap from vaccines. PCs. I was on the board of Gateway Computer. I don't
[20:21] know if you remember them, and I realized that's the weakest flex in the world. We were the second
[20:24] largest PC manufacturer in the world. We got sold for $700 million. I wonder, because of AI's ability to
[20:31] reverse engineer every model, whether we are wrongly assuming these companies are going
[20:38] to be able to become multi-trillion dollar companies, and whether the real winner won't in fact be us,
[20:43] recognizing tremendous utility. A mother who has a kid with diabetes: there's going to be
[20:49] a thick layer of innovation and apps on top of an LLM that will save her the five months a year she spends
[20:54] managing that child's diabetes. Drug discovery is going to get accelerated. I don't see
[20:59] any reason why we can't use this for defensive measures as much to prevent crimes as much as
[21:05] it's used for offensive measures. I think productivity, economic gains. And actually,
[21:11] I think it's always important with these technologies, where people like me sound smarter
[21:15] when we catastrophize, to ask what could go right.
[21:21] I like that optimistic message, and it takes me to our final story this week. It's a poll that NBC did.
[21:27] A survey of 3,000 young Americans, 47 percent of people aged 18 to 29 said they would rather live
[21:35] in the past, in the age of the jet engine, than in the present. And that's not nostalgia. It's not
[21:42] make America great again. It's a fatigue with smartphones that never switch off. It's the feeling
[21:49] of being permanently connected, they say. A life that is mediated through screens,
[21:54] manipulated by algorithms, apps that harvest your data, that feeling that you're being watched and
[22:00] monetized incessantly. And 48 percent say the risks of AI in the workforce outweigh the benefits. Well,
[22:06] I think we've already talked about what they need to do to scale. Maybe they heard Scott's optimistic
[22:13] idea of where AI is taking us. But how do you reassure people that we built the right kind of internet?
[22:20] I think I would have actually said the same thing, but I am an AI optimist too. And the problem is,
[22:28] is across media, we are hearing all of the pessimists. We're hearing the doomers all the
[22:32] time. I think it's really important because we need to be aware of the risk. We need to create the
[22:36] guardrails. But if governments step up and they regulate this properly, if they don't allow social
[22:43] media apps to have those sort of addiction algorithms in there that mean that there is no control,
[22:47] there is no freedom using those. Freedom is depleted once you become addicted.
[22:52] If they can regulate these things properly so that we can see all the good in healthcare,
[22:57] in education, I think that this could be an amazing period.
[23:00] Is that what it is, Scott? Do you think control,
[23:04] giving people control of their own lives? And can AI fix that? I mean, or will it make it worse?
[23:09] AI was going to save us. It was exciting. Sam Altman was the son we all wanted.
[23:15] And all of a sudden, everyone has decided pretty much that AI is a threat. There's only one cohort
[23:20] of people that is still optimistic about AI. And it's the wealthiest. And this comes back to a simple
[23:26] dichotomy. And that is, are you an owner or an earner? If you own stocks, you love AI,
[23:32] because it's basically taking the S&P up. It's responsible for our GDP growth. Everything's
[23:38] getting lifted by AI. There's certain sectors that are being hurt, like SaaS. But the S&P is just touching
[23:43] new highs. So if I own a home and have a big stock portfolio, yay AI. Let disruption
[23:51] happen as long as my stocks go up. If you're an...
[23:54] Just a final thought on that, Marc. I mean, I am trialing AI agents. They run my email and my
[24:02] calendar. And the thing that strikes me is that for the first time, I feel like the technology is
[24:06] actually working for me rather than the other way around. Maybe that could be the answer to Gen Z's
[24:12] fatigue. That it's actually doing stuff for us rather than taking things from us.
[24:17] I'm going to offer you the idea that the machines in The Matrix offered: 1999 was the pinnacle
[24:25] of human civilization. That was pretty much as good as it got. You don't have
[24:32] smartphones, you have feature phones. If you want some music, get it from a CD. Go to a record
[24:36] store and buy a CD. You're not constantly bombarded by social media all of the time. Now,
[24:42] it's a controversial, some might say techno-luddite idea. But I do, I genuinely feel
[24:48] that my life in 99 was pretty good. It's probably got a lot to do with the fact that I was considerably
[24:52] younger in 1999 than I am in 2026. But, but if we could experience that, you know, what it's like to
[24:59] live like that. And anybody can experience what it's like to live like that. Put your smartphone
[25:02] away and get a feature phone and try and find some CDs in your household. And just see what it's
[25:07] like to live like that for a week. That's my experiment for you. The homework
[25:12] everybody has is to try and live like it's 1999 for a week. I feel uplifted. Professor Scott
[25:17] Galloway, it's been a real pleasure having you on the programme. Thank you very much. Thank you to Mark and
[25:21] Priya as ever. If you have any thoughts on what we discussed today, a reminder that you can email us at
[25:27] AI Decoded at bbc.co.uk. And I'm going to put on screen again for you, the QR code for the AI Decoded
[25:35] playlist, so you can go and have a look at the back catalogue there on YouTube. They're all
[25:40] there. Don't forget also, you can watch us on the BBC iPlayer. That's it from us. Thanks for watching
[25:45] this week. We'll see you next time.