About this transcript: This is a full AI-generated transcript of 'Terrifying warning sign': Anthropic delays AI model over security concerns, published April 9, 2026. The transcript contains 1,708 words with timestamps and was generated using Whisper AI.
[0:00] Anthropic has decided not to release its latest AI model, called Claude Mythos, to the full public.
[0:06] Now, this version is apparently so advanced it can find vulnerabilities in a huge variety of software applications,
[0:12] which is both a great advancement potentially for improving cybersecurity
[0:16] and also a huge danger if used by criminals or others to hack into systems.
[0:21] For now, Anthropic is only making this system available to some of the biggest tech companies
[0:25] so they can test it and improve their own systems.
[0:29] The fear, of course, is that if it falls into the wrong hands, cyber criminals, spies, the fallout could be catastrophic.
[0:36] My next guest, Tom Friedman, lays out those concerns in his latest New York Times op-ed with the headline,
[0:41] Anthropic's Restraint is a Terrifying Warning Sign.
[0:44] He's also the author of the best-selling book, among many others, From Beirut to Jerusalem.
[0:49] So, Tom, you write this is potentially as fundamental and significant a turning point
[0:53] as was the emergence of mutually assured destruction and the need for nuclear non-proliferation.
[1:00] That's a huge statement.
[1:01] How do you say that and what is your biggest concern?
[1:06] Well, you know, Anderson, our economy now, all our biggest systems, our water systems,
[1:11] our airlines, our airports, our transportation and telecommunication systems,
[1:16] all run on software and operating systems, as do those of every other major economy in the world
[1:22] that we are interlinked with.
[1:25] And if, basically, we now have software that not only is fantastically good at writing new code,
[1:34] but it turns out is fantastically good at finding bugs in your own code or other people's code,
[1:43] that tool, that power now can be used all over the world.
[1:48] It will be in the hands of everyone.
[1:50] Imagine a world where everyone had a nuclear bazooka, basically.
[1:53] And because it's so easy, a threat, a cyber threat that used to be confined to intelligence organizations,
[2:00] big companies, attacks which are very hard and very expensive to pull off,
[2:04] is about to become really cheap and really easy.
[2:07] And that's why Anthropic backed off.
[2:09] Yeah.
[2:09] And Anthropic, what they've actually said, what they've disclosed, is that, quote,
[2:13] the vulnerabilities this program has spotted
[2:18] have in some cases survived decades of human review and millions of automated security tests.
[2:23] I mean, that's mind-blowing that all the current, you know,
[2:27] human review, humans looking at it and all the current, you know,
[2:31] AI security tests that they have run programs through and software through,
[2:35] all of it still had bugs that were detected by this new powerful AI.
[2:42] Well, you know, this is a point that my, you know,
[2:44] technology tutor and partner in this column, Craig Mundie,
[2:47] keeps making that this, these AI capabilities,
[2:51] they're coming so much faster than people realize.
[2:55] And that we aren't, as Craig says, birthing just a new tool.
[3:00] Anderson, we're birthing a new species.
[3:03] It's not carbon-based like we are, but silicon-based.
[3:06] But it is a new species that we're going to have to learn to control
[3:09] and collaborate with before it makes us its pet.
[3:14] So this is the front end of a really big problem.
[3:17] Well, regulation, control is not a word that this White House is very interested in,
[3:23] in terms of artificial intelligence.
[3:26] Anthropic is probably one of the few AI companies.
[3:29] Dario Amodei, who left OpenAI years ago with his sister and several others to form Anthropic,
[3:35] they're probably the most kind of safety forward.
[3:38] Do you think that there will be, given now this development and others,
[3:44] some sort of move to, you know, really look at kind of regulation or even government to government?
[3:53] I mean, even with some of America's adversaries,
[3:55] you've written about the idea of the U.S. and China kind of getting together on this.
[4:00] Well, you know, I think, Anderson, this is the front end of a fundamentally new problem
[4:04] we have as a species, okay?
[4:07] So when I started out in journalism, and when you started out, the world was connected.
[4:11] During our careers, with the Internet, the world became interconnected.
[4:15] What's happened today is the world has become interdependent, okay?
[4:20] As my teacher, Dov Seidman, likes to say,
[4:22] interdependence is no longer our choice, it's our condition.
[4:26] Now, we are going to rise together or we're going to fall together.
[4:30] But, baby, whatever we're doing, we're doing it together.
[4:33] Now, we can discover that early or we can discover that late, but we will discover that.
[4:38] It's true about climate, but it's also true now about cyber tech and all of these communication systems.
[4:45] No company alone can solve this problem.
[4:48] Therefore, no country alone can solve this problem.
[4:51] And therefore, sooner or later, the two tech superpowers, the U.S. and China,
[4:56] are going to have to sit down and work out this problem together,
[4:59] learn how to compete and cooperate on AI, because we are interdependent.
[5:06] That is our condition.
[5:08] It's not a choice.
[5:09] Trump can learn that early.
[5:11] He can learn that late.
[5:12] But he will learn that.
[5:14] If you have, though, these powerful AI weapons at your disposal,
[5:18] and if it's an American company that has it,
[5:21] if there's an AI arms race, which is how many people describe what is happening around the world,
[5:27] isn't there a kind of a push at some point or pressure at some point for an American company
[5:34] to in some way use this for the benefit not only of their own company,
[5:38] but for a country's benefit?
[5:41] The idea of some sort of China-U.S. alliance on this seems very difficult to imagine right now.
[5:49] Hard to imagine, but, you know, when I say it to people, they say,
[5:54] God, that's incredibly naive.
[5:56] I say, no, no, no, no.
[5:57] What's naive is thinking we're going to be okay if we don't do that,
[6:02] because both China and America will be hugely vulnerable
[6:06] to criminals within their own societies when they have these tools,
[6:10] much more vulnerable than they're going to be to any threat the other would pose.
[6:15] So, again, we're in the middle of a giant meta transition.
[6:20] You know, I've been working on a new book on this.
[6:22] We have become godlike as a species.
[6:24] We've just created this super intelligent being.
[6:27] There's just one problem.
[6:28] We've become godlike without the Ten Commandments.
[6:32] And, you know, we're going to have to sit down together with the biggest companies
[6:37] and our biggest rivals and write a new Ten Commandments,
[6:41] ten for us and ten for the new species we've just birthed.
[6:45] OpenAI is calling for governments to implement common sense AI regulation.
[6:51] On the substance of the proposal itself, first, what do you make of its approach
[6:57] and how do you think lawmakers will respond?
[7:00] Because Congress, historically, is fairly slow moving when it comes to new technology.
[7:07] Oh, yeah.
[7:07] Slow moving is one way to put it.
[7:09] At sea is another way to put it, right?
[7:11] They have been totally incapable of wrapping their arms around what this may represent.
[7:18] I mean, you'll remember, Boris, Andrew Yang, who once upon a time ran for president
[7:21] on the concept that we would all need to somehow draw from a universal pool of money
[7:26] because AI was going to wipe out all jobs.
[7:28] You know, we laughed at him back in the day.
[7:29] That guy must be drinking like a fish these days.
[7:32] Because, I mean, it is absolutely the case here that you now have this company saying openly
[7:37] that its technology is going to create incredible upheaval,
[7:41] so much so that what it's proposing here is pretty interesting, Boris.
[7:44] I mean, it comes up with a number of things.
[7:45] It suggests, for instance, that we should all be working a four-day work week with no loss of pay,
[7:50] that we should have a sovereign wealth fund, essentially a kind of general fund
[7:55] that would help bring some dividends back to workers who currently aren't seeing anything.
[7:58] And this is a big one which breaks from other big tech companies,
[8:02] which is the idea that the government needs to stop depending on labor payroll
[8:07] as a source of income tax and instead create a tax entirely on capital gains and corporate profits
[8:15] because basically what they're saying here is that you and I and our paycheck
[8:18] isn't going to be enough to support the government anymore.
[8:20] All that money is going to be vacuumed up into these larger corporations.
[8:24] So there's some good and interesting ideas here.
[8:26] But as you brought up so rightly, you know, the timing here,
[8:29] distracting, I think, from a big and embarrassing investigation on the part of the New Yorker
[8:33] is also worth mentioning here, Boris.
[8:35] One more question on the substance of the argument for regulation.
[8:41] Altman talks about superintelligence, the point at which AI becomes so sophisticated
[8:47] that it would out-compete or out-smart even humans using AI.
[8:52] How close do you think we are to that point?
[8:54] So I am not personally convinced that we are anywhere near that.
[8:59] I think that it is some, you know, the technical people that I speak to,
[9:02] including people who used to be at OpenAI and Anthropic and others of these big companies,
[9:07] basically say we're not on the path to that right now,
[9:10] that the kind of very smart parrot that we currently use to write our knock-knock jokes
[9:15] and our wedding vows is not going to put us toward, you know,
[9:18] an omniscient brain that's going to run all things.
[9:20] But we are already on a path where job disruption is real,
[9:24] where human anxiety around AI is real.
[9:27] And so what's interesting here is to see a company that is really at the center of that disruption
[9:32] calling this kind of a new deal for AI, trying to invoke the Roosevelt era.
[9:39] But of course, that was an era grappling with the depression,
[9:42] and this is the company that is essentially saying we may cause a depression here,
[9:46] and so we're going to need a national response to deal with it.
[9:49] There's a circular logic to this kind of statesmanship on the part of a company
[9:55] that really seems to be driving the change it's worried about, Boris.