Anthropic’s powerful new AI model raises concerns about high-tech risks

April 10, 2026 · 6 min · 1,249 words

About this transcript: This is a full AI-generated transcript of Anthropic’s powerful new AI model raises concerns about high-tech risks, published April 10, 2026. The transcript contains 1,249 words with timestamps and was generated using Whisper AI.

[0:00] Anthropic announced this week it has begun limited testing of its newest AI model called Mythos, [0:05] one the company says is so powerful it could cause widespread disruption if released to the public. [0:11] It's just generally better at pursuing really long-range tasks that are kind of like the tasks [0:18] that a human security researcher would do throughout the course of an entire day. [0:23] Obviously, capabilities in a model like this could do harm if in the wrong hands. [0:28] And so we won't be releasing this model widely. [0:31] For now, Anthropic is giving more than 40 tech companies, including some rivals, [0:36] access to Mythos to test it and identify vulnerabilities across systems. [0:41] But even that move is raising concerns. For a closer look at all of this and the implications, [0:45] we're joined now by Gerrit De Vynck, who covers AI for The Washington Post. Thanks for being with us. [0:51] Of course. [0:52] So help us understand the concern here. What specifically makes this model different from [0:56] other AI models and why is there so much, frankly, fear around it? [1:02] The specific concerns that are being called out here is that this model is really good at finding [1:07] gaps in software that hackers could exploit. So right now, all software has bugs, [1:12] but software is pretty complicated and you need to kind of really know what you're doing in order to [1:16] sift through all that code to find something that you could then use to hack into a system. And what [1:21] Anthropic is saying and some of the independent cybersecurity experts that they've also given [1:26] access to this model to are saying is that this can essentially do that automatically. It can sift [1:31] through all sorts of code. Something that might take humans who are very good at this months to do, [1:37] it can do in minutes or hours.
And so the concern here is that if this is sort of out in the public, [1:43] anyone can use it, that anyone who wants to hack into any kind of software for whatever reason [1:48] would be able to do it using this technology. And that's why the company is saying at least they're [1:53] sort of keeping it under wraps for now. Keeping it under wraps, but also giving, [1:57] as we mentioned, some 40 other companies, including Microsoft and Nvidia, access in part to strengthen [2:03] their own cyber defenses. What do we know about that decision? Does sharing it more widely [2:07] actually reduce the risk or potentially increase it? Yeah, I mean, there is a bit of a precedent here [2:14] in cybersecurity. Often if one company finds some flaw in another company's software, instead of [2:20] just giving it to the public and, you know, creating a situation where that other company [2:24] could be hacked, they will sort of go, you know, behind the scenes and say, hey guys, we found this. [2:29] You might want to fix this before the rest of the world figures it out. And so I think it's sort [2:33] of in that tradition that they're doing this. But of course, some people are saying, hey, now we have [2:37] all these powerful tech companies that have access to this allegedly extremely powerful tool for [2:42] cybersecurity. Well, is it also powerful for other things, you know, other things that they could use [2:47] to, you know, increase their business, get an edge on other companies? So there are some complaints [2:52] that, you know, if this thing is really so good, why don't you let the rest of the world actually see [2:55] it for themselves? And then we can decide what to do with it. Logan Graham, who's one of Anthropic's [3:01] researchers, suggested that if this AI program were fully released, it could force widespread software [3:08] updates, eventually exposing weaknesses everywhere. Is that a realistic scenario or is he in some ways [3:14] overstating it?
Yeah, potentially. I mean, it's difficult because, you know, besides these companies, [3:21] no one has really been able to get their hands on it. I think we always need to take these big AI [3:25] companies with a grain of salt. It's not the first time an AI company has said, oh my goodness, [3:29] our new technology is so powerful, we should be afraid of it. You know, it's great marketing, [3:34] right? Because if something is so powerful that it could, you know, change the world or cause chaos, [3:39] it's also very powerful for doing other things. And so I think we need to be careful. You know, [3:44] I don't, I'm not necessarily saying that Anthropic is lying or misleading the public here. I, I'm sure [3:49] they are very legitimate about these concerns, but I do think that we're already in a situation where [3:53] cybersecurity is pretty atrocious. I mean, everyone's personal data has been hacked at some point. [3:59] If anyone really wants to get into a software system, if they have the resources, [4:03] the, you know, incentive, they will probably be able to do it. We already live in a world where [4:08] software is broken and needs to be updated constantly, right? Every time you open your [4:13] operating system, it's probably pinging you to update the apps that you have on your computer, [4:18] right? That's because of the cybersecurity situation we have right now. And in the same way that this [4:24] Mythos technology could be used to hack into computers, it could also be used to defend against [4:30] hacks. And so a lot of the cybersecurity experts are saying, look, yes, this is concerning, but we [4:35] can also use this technology. The good guys can also use it to protect us. And so it doesn't [4:40] necessarily completely change that balance of power that we have right now. Well, say more about that, [4:45] because there is this strange disconnect where you have now even the AI companies themselves warning [4:50] about the potential dangers.
And this is as the AI companies are also racing to release more powerful [4:56] systems at the same time. What accounts for that? Yeah. I mean, I think, you know, it's very easy to [5:02] sort of point at that and say like, look, like what's really going on here. And I think, you know, each AI [5:07] company is slightly different. They have different incentives, but it's true. I mean, they are all in this [5:11] extremely competitive race to build the best AI system. It's very expensive to train these things. It costs [5:19] hundreds of millions of dollars to develop each new version of this AI technology, and very few [5:25] companies are able to do it. And the entire tech industry is in agreement that this is, you know, [5:29] the most important technology to come out probably since the internet itself. And so there's a huge [5:34] amount of money that is incentivizing the development of this technology. At the same time, a lot of the [5:39] people who work at these companies do legitimately believe that there are concerns that it could be [5:44] used for cybersecurity. It could be used for misinformation. It could, you know, some people even [5:48] believe that it could, you know, become so smart in the coming years that humans are, you know, [5:53] challenged to keep it under control. And so I do think that those are real beliefs held by some [5:58] people at these companies. And yet they are locked in this competitive dynamic. [6:03] Gerrit De Vynck covers AI for The Washington Post. Gerrit, thanks again for being with us. [6:19] Support journalism you trust. Support PBS News. Donate now or even better, start a monthly contribution today.
