About this transcript: This is a full AI-generated transcript of Ronan Farrow on Sam Altman's strained relationship with the truth — Decoder from Decoder with Nilay Patel and The Verge, published April 17, 2026. The transcript contains 10,216 words with timestamps and was generated using Whisper AI.
"There's a line in the story that to me feels like the thesis of the story. Sam Allman is unconstrained by the truth in that he has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction, and the second is an..."
[0:00] There's a line in the story that to me feels like the thesis of the story.
[0:03] Sam Altman is unconstrained by the truth in that he has two traits that are almost never seen in
[0:08] the same person. The first is a strong desire to please people, to be liked in any given interaction,
[0:13] and the second is an almost sociopathic lack of concern for the consequences that may come
[0:18] from deceiving someone. So many other people around it who had the concerns and voiced them
[0:24] urgently just folded like napkins and changed their tune the moment they saw the wind was blowing
[0:30] the other way and they wanted in on the profit train. It's pretty dark, honestly, from my standpoint
[0:36] as a reporter. But then I think back to Sam getting fired. I took a source call at the Bronx Zoo at 7
[0:42] p.m. on a Friday and it was someone saying they're going to try to get Sam back. Well, first of all,
[0:47] sorry to your daughter and my partner and all the other people around journalists.
[0:51] It was quite a weekend for everyone. Yeah, no, it does take over one's life.
[0:56] Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge,
[1:01] and Decoder is my show about big ideas and other problems. Today, I'm talking with Ronan Farrow,
[1:06] one of the biggest stars of investigative reporting working today. Ronan broke the Harvey Weinstein
[1:11] story, among many, many others. And just last week, he and co-author Andrew Marantz published an
[1:16] incredible deep dive feature in The New Yorker about OpenAI CEO Sam Altman, his trustworthiness,
[1:22] and the rise of OpenAI itself. One note before we go any further here. The New Yorker published
[1:27] that story and Ronan and I had this conversation before the attacks on Sam Altman's home. So you
[1:33] won't hear us talk about them directly. But just to say it, I think violence of any kind is unacceptable.
[1:39] These attacks on Sam were unacceptable. And that the kind of helplessness that people feel
[1:44] which leads to this kind of violence is also unacceptable. And it's worth more scrutiny
[1:50] from both the industry and our political leaders. I hope that's clear. All that said,
[1:54] there is a lot of swirl around Sam Altman that's fair game for rigorous reporting.
[1:59] The kind of reporting that Ronan set out to do. Thanks to the popularity of ChatGPT,
[2:03] Altman has emerged as the most visible figurehead of the AI industry, having turned a once non-profit
[2:09] research lab into an almost trillion-dollar private company in just a few years.
[2:14] But the myth of Sam Altman is deeply conflicted, defined equally by both his obvious dealmaking
[2:19] ability and the tendency, which Ronan reported, to, well, lie to everyone around him. Ronan talked to
[2:25] Altman many times over the 18 months he spent reporting the story. And so one of the main things
[2:30] I was curious about was whether Ronan sensed any change in Altman over that time. After all,
[2:35] a lot has happened in AI, in tech, and in the world over the past year and a half. You'll hear
[2:40] Ronan talk about that very directly, as well as a sense that people have become much more willing
[2:45] to talk about Altman's ability to stretch the truth. Okay, Ronan Farrow on Sam Altman, AI,
[2:50] and the truth. Here we go. Ronan Farrow, you're an investigative reporter and contributor to The New
[2:54] Yorker. Welcome to Decoder. Glad to be here. Thanks for having me. I am very excited to talk to you. You
[2:58] just wrote a big piece for The New Yorker. It's a profile of Sam Altman and sort of with it,
[3:03] OpenAI. My read of it is that, as all great features do, it, with rigorous reporting,
[3:10] validates a lot of feelings people have had about Sam Altman for a very long time.
[3:15] You've obviously published it. You've gotten reactions to it. How are you feeling about it
[3:18] right now? I've been heartened, actually, by the extent to which it's broken through
[3:24] in a time where the attention economy is so kind of schizophrenic and shallow. This is a story that,
[3:32] in my view, affects all of us. And when I spend a year and a half of my life, and my co-author Andrew
[3:38] Marantz also spent that time of his really trying to do something forensic and meticulous,
[3:44] it's always because I feel like there are bigger structural issues that affect people, you know,
[3:50] beyond the individual at the heart of the story, beyond the company at the heart of the story.
[3:53] Sam Altman, against the backdrop of Silicon Valley hype culture, right, and startups that balloon to
[4:02] massive valuations based on promises that may or may not come to pass in the future, and an increasing
[4:08] embrace of a founder culture that thinks of telling different groups different,
[4:16] conflicting things as a feature, not a bug. Even against that backdrop, Sam Altman is an extraordinary case where everyone in
[4:24] Silicon Valley who expects those things can't stop talking about this question of his trustworthiness
[4:32] and his honesty. And, you know, we knew already that he had been fired over some version of allegations of
[4:40] dishonesty or serial alleged lying. But extraordinarily, despite the fact that there's been wonderful
[4:48] reporting, you know, Keach Hagey has done great work on this, Karen Hao has done great work on this,
[4:54] there really wasn't definitive understanding of the actual alleged proof points and the reasons why those
[5:01] have stayed out of the public eye. Point number one is, I feel heartened by the fact that some of those
[5:07] gaps in our public knowledge, and even in the knowledge of Silicon Valley insiders, have now been
[5:13] filled a little bit more. Some of the reasons that they were gaps have been filled a little bit more.
[5:19] You know, we report on cases where people inside this company really felt like things were covered up
[5:25] or deliberately not documented. One of the new things in this story is that a pivotal law firm
[5:29] investigation by WilmerHale, you know, which is obviously a fancy, credible, big law firm that did
[5:35] investigations of Enron and WorldCom, which, by the way, were all voluminous, like hundreds of pages
[5:41] published. They did this investigation that was demanded by board members that had fired Altman
[5:46] as a condition of their departure when he got rid of them and he came back. And extraordinarily,
[5:52] in the eyes of many legal experts I spoke to, and shockingly, in the eyes of many people in this
[5:56] company, they kept it out of writing. All that ever emerged from that was an 800-word press release from
[6:03] OpenAI that described what happened as a breakdown in trust. And we, you know, confirm that this was
[6:11] kept to oral briefings. There's cases we talk about where, like, for instance, a board member seemingly
[6:19] wants to vote against the conversion from OpenAI's original nonprofit form into a for-profit entity,
[6:26] and it's recorded as an abstention. And there's, like, a lawyer in the meeting saying,
[6:31] well, that could trigger too much scrutiny, and the person who wants to vote against
[6:37] gets recorded, to all appearances, as an abstention. You know, there's factual dispute,
[6:42] OpenAI claims otherwise, as you might imagine. But these are all cases, essentially, where you have
[6:47] a company that, by their own account, holds our future in their hands, right? The safety stakes are
[6:54] so acute, they have not gone away. This is the reason this company was founded as a nonprofit focused on
[6:59] safety. And where things were being obscured in a way that, like, credible people around this
[7:06] found less than professional. And you couple that with a backdrop where there's so little political
[7:12] appetite for meaningful regulation. And I think it's a very combustible situation. The point for me
[7:20] is not just that Sam Altman, you know, deserves these questions so acutely. It's that any of these
[7:27] guys in this field, and many of the key figures exhibit, like, if not this particular idiosyncratic
[7:35] alleged lying all the time trait, certainly like some degree of a race to the bottom mentality,
[7:39] right, where the people who were safetyists have watered down those commitments, and everyone is in a
[7:44] race posture. I think the point as we look at, like, even recent leaks out of Anthropic is,
[7:51] there's a person who poses the question of who should have their finger on the button in this
[7:54] piece. The answer is, like, if we don't have meaningful oversight, I think we got to be asking
[7:59] serious questions and trying to surface as much information as we can about all of these guys.
[8:04] So I've been heartened by what feels like a meaningful conversation about that, or the beginnings of
[8:08] one. The reason I asked it that way is, you worked on this for a year and a half,
[8:13] you talked to, I believe, 100 people with your co-author, Andrew. That's a long time for a story
[8:17] to cook. And I think about the last year and a half in AI in particular, and boy, have the attitudes
[8:24] and values of all these characters shifted very quickly. Maybe none more so than Sam Altman, who
[8:32] started off as the default winner because they had released ChatGPT and everyone thought that would
[8:35] just take over for Google. And then Google responded, which seemed to surprise them that Google would try to
[8:42] protect its business, maybe one of the best businesses in tech history, if not business history.
[8:47] Anthropic decided that it would focus on the enterprise. It seems to be taking a commanding
[8:52] lead there because enterprise usage of AI is so high. And OpenAI's product, they're now refocusing
[8:59] away from we're going to take on Google to Codex, and they're going to take on the enterprise. And I just
[9:04] can't quite tell, over the course of reporting, over the last year and a half, did it feel like the
[9:09] characters you were talking to changed? Their attitudes and their values, did those change?
[9:15] Yes. I think, first of all, that the critique that is explored in this piece,
[9:21] coming from many people inside these companies at this point, that this is an industry that,
[9:27] despite the existential stakes, is descending into something of a race to the bottom on safety,
[9:34] and where speed is trumping everything else, that concern has grown more acute.
[9:37] And, like, I think those concerns have been more validated as the past year and a half
[9:43] has transpired. Simultaneously, attitudes about Sam Altman specifically have changed.
[9:50] You know, when we started talking to sources for this, people really, really were leery of being
[9:59] quoted about this, going on the record about this. And then by the end of the reporting, you know,
[10:03] you have a body of reporting where people are talking about this very openly and explicitly,
[10:07] and you have, like, you know, board members saying, like, he's a pathological liar, he's a sociopath,
[10:13] um, a range of perspectives from, uh, this is dangerous given the safety stakes, we need
[10:20] leaders of this tech that have elevated integrity, all the way up to, like, forget the safety stakes.
[10:25] This is behavior that is untenable for any executive of any major company, that it just creates too much
[10:31] dysfunction. So the conversation has become much more explicit, um, in a way that feels maybe belated,
[10:39] but is, uh, heartening in one sense. And Sam Altman, to his credit, the piece is very fair and even
[10:46] generous, I would say, to Sam. You know, this is not the kind of piece where there was a lot of gotcha
[10:50] stuff. This is like, I spent many, many hours on the phone with him as we were finishing this up and
[10:56] really heard him out. And as you can imagine in a piece like this, not everything makes it in. And
[11:02] some of those cases in this one were because, like, I was listening sincerely. And if Sam was actually
[11:08] making an argument that I felt carried water, that something, even if it was true, could be
[11:12] sensationalist, um, you know, I really erred on the side of keeping this, like, forensic and measured.
[11:20] I think that is being received rightly. And I just hope this factual record now that's
[11:26] accumulated over this period of time can trigger a more bracing conversation about the need for
[11:31] oversight. That's actually my next question. I think you talked to Sam a dozen times over
[11:35] the course of reporting the story. Again, that's a lot of, a lot of conversations over a long period
[11:39] of time. Do you think Sam changed over the course of the reporting over the past year and a half?
[11:44] Yeah, I think one of the most interesting subplots in this is that Sam Altman is also
[11:51] talking about this trait more explicitly than he has in the past. Um, the posture of
[11:57] Sam in this piece is not like there's nothing there. You know, this is not true. I don't know
[12:01] what you're talking about. The posture he has is, you know, he says that this is attributable to
[12:08] a people-pleasing tendency and a kind of conflict aversion. Um, he's acknowledging that it caused
[12:14] problems for him, particularly earlier in his career. He is saying, well, I've kind of, I am moving
[12:22] past that or have to some extent moved past that. I think what's really interesting to me is the
[12:27] contingent of people we talked to who were not just sort of safety advocates, um, not just the underlying
[12:33] technical researchers who very often tend to have these acute safety concerns, but like pragmatic
[12:39] big time investors, um, who are backers of Sam's who, in some cases, look at this question and talk
[12:46] about even having played a key role in him coming back after his firing. Um, and now say on this
[12:54] question of like, has he reformed to what extent is that change meaningful? Uh, well, we gave him the
[13:01] benefit of the doubt at the time. And I'm thinking of, you know, one prominent investor in particular
[13:06] who said, but since then, like, it seems clear he wasn't taken out behind the woodshed was the phrase
[13:12] that this one used, um, to the extent that was necessary. And as a result, it seems like this is
[13:18] now like a stable trait. Like we're seeing this in an ongoing way. Um, and you can look at some of
[13:24] OpenAI's biggest business relationships and the way they kind of carry the weight of that mistrust,
[13:31] um, in, in an ongoing way, like Microsoft, uh, you talk to executives over there, there's really acute
[13:39] and like recently catalyzed concerns. Um, there's this instance where in the same day, OpenAI is
[13:47] reaffirming their exclusivity with Microsoft with respect to underlying stateless AI models.
[13:54] And then announcing a new deal with Amazon that's to do with selling enterprise solutions,
[13:59] uh, for building AI agents that are stateful, meaning they have memory. And you talk to Microsoft
[14:07] people and they're like, that's not possible to do without interacting with the underlying stuff that
[14:14] we have an exclusivity deal on. So that's just like one of many small examples where
[14:19] this trait has tendrils into ongoing business activity, um, all the time and is a subject of
[14:25] active concern, you know, within OpenAI's board, within OpenAI's executive suite, um, and in the
[14:31] wider tech community. You keep saying that trait, there's a line in the story that to me feels like
[14:37] the thesis of the story. Uh, and it's a description of the trait you're describing. Uh, it's that Sam Altman
[14:42] is unconstrained by the truth in that he has two traits that are almost never seen in the same person.
[14:47] The first is a strong desire to please people, to be liked in any given interaction. And the second
[14:52] is an almost sociopathic lack of concern for the consequences that may come from deceiving someone.
[14:58] I have to tell you, I read that sentence 500 times and I tried to imagine always saying what people
[15:05] wanted in order to be liked, and then not being upset when they felt lied to. And I couldn't, I could not
[15:12] make like my emotional state understand how those things can exist in the same person.
[15:17] How, as you've talked to Sam a lot, you've talked to people that have experienced these traits.
[15:22] How does he do it?
[15:23] Yeah. You know, it's interesting on a human level because I do approach bodies of reporting
[15:29] like this with a real focus on humanizing whoever's at the heart of it and like seeking deep
[15:35] understanding, right. And empathy. When I kind of tried to approach this from a more human standpoint
[15:41] and say, Hey, like this would be devastating for me if so many people that I've worked with said,
[15:48] I'm a pathological liar. Um, how do you carry that weight? Like, how do you talk about that in
[15:55] therapy? What is the story you tell yourself about that? You know, I got some sort of, in my view,
[16:01] maybe like West coast platitudes about like, yeah, I like breath work. Um, but not a lot of the kind
[16:09] of bracing sense of deep self confrontation that I think a lot of us would probably have if we were,
[16:16] if we were seeing this kind of feedback about our behavior and our treatment of people.
[16:21] And I think that that actually goes to the broader answer to the question too. Um, Sam asserts basically
[16:27] that this trait has caused problems, but also that it's part of what's empowered him to, um,
[16:33] accelerate OpenAI's growth so much that, you know, he is able to, um, unite and please essentially
[16:41] different groups of people. Um, he's constantly convincing all of these conflicting constituencies
[16:47] that what they care about is what he cares about. And that can be a really useful skill for a founder.
[16:53] Um, you know, I've talked to investors who then say, well, maybe it's a less useful skill for
[16:57] actually running a company because it sows so much discord, but on the Sam personal side,
[17:03] um, you know, I, I think the thing that I pick up on when I try to connect on a human level on this,
[17:10] the apparent lack of like deeper confrontation and reflection and like, and self accountability
[17:17] also informs that whether you want to call it a superpower or a liability for a company preparing
[17:22] for an IPO. He is someone who, in the words of one former board member, there's this, um, uh,
[17:29] former board member named Sue Yoon, who's on the record in the piece saying that to the point of
[17:33] fecklessness is the phrase she uses. Um, he really believes the shifting reality of his sales pitches,
[17:41] or is able to convince himself of them. Or at least if he doesn't believe them, he is able to,
[17:48] uh, you know, bluster through them without like meaningful self doubt. I think the thing that
[17:55] you're talking about where you or I might, as we're saying the thing and realizing that it conflicts
[18:00] with the other assurance we've made, kind of have a moment of freezing up, um, or checking ourselves.
[18:08] I think that that doesn't happen with him. Yeah. And, you know, there's a wider Silicon Valley
[18:14] hype culture and like founder culture that, uh, kind of embraces that.
[18:20] Yeah. It's funny. The Verge is built on what amounts to a product reviews program.
[18:25] Like it's the heart of what we do here is we, I hold a trillion dollars of Apple R&D once a year
[18:30] and say, this phone is a seven and it, it sort of legitimizes all of our reporting and our opinions
[18:35] elsewhere. Right. We, we have an evaluative function and we spend so much time just looking
[18:40] at the AI products and saying, do they work? And that feels missing from a lot of the conversation
[18:46] about AI as it is today. There's endless conversation about what it might be able to do,
[18:50] how dangerous it might be. And then you drill down and you say, does it actually do the thing it's
[18:54] supposed to do today? And in some cases, the answer is yes, but in many, many cases,
[18:58] the answer is no. And that feels like it connects to the hype culture you're describing and also to the
[19:04] sense that, well, if you say it's going to do something and it doesn't, and someone feels bad,
[19:08] that's fine. Cause we're onto the next thing. Like that's in the past. And in AI in particular,
[19:15] Sam is so good at making the grand promises, right? Just this week, I think the same day
[19:20] as your story published, OpenAI released a policy document that said, we have to rethink the social
[19:25] contract and have AI efficiency stipends from the government. And this is a grand promise about how
[19:32] some technology might shape the future of the world and how we live. And all of that relies on the
[19:38] technology working in exactly the way that maybe it's promised to work or it should work.
[19:42] Did you ever find Sam doubting AI turning into AGI or super intelligence or getting to the finish line?
[19:51] Because that's the thing that I wonder about the most. Do, is there any reflection about
[19:55] whether this core technology can do all of the things that they say it can do?
[20:00] It's exactly the right set of questions. There are credible technologists that we spoke to in
[20:06] this body of reporting (and obviously Sam Altman is not one, right? He's a business person) who say
[20:13] the way that Sam talks about the timeline for this tech is just way off. You know, there's blog posts
[20:20] going back a few years where Sam is saying, we've already reached the event horizon. AGI is like basically
[20:25] here. Um, super intelligence is around the corner. We're going to be on other planets.
[20:30] Uh, we're going to be curing all forms of cancer. Like truly I'm not, uh, you know, embellishing.
[20:37] The cancer one is actually interesting that Sam is hyping up the person who
[20:40] theoretically cured their dog's cancer with ChatGPT. And that simply did not happen.
[20:46] They, they talked to ChatGPT and that helped them guide some researchers that actually did the work.
[20:51] But the one-to-one this tool cured this dog is not actually the story.
[20:57] I'm glad you raised that because I want to go on to this bigger point about, you know,
[21:01] when is the, both the potential and the risk of the technology really going to vest? Um,
[21:06] but it's worth mentioning these like little asides constantly that happen from Sam Altman,
[21:12] where he seems to embody this trait all over again. I mean, to use the example of the WilmerHale report,
[21:18] um, where we had this information that it had been kept out of writing and wanted to know,
[21:26] you know, was the brief, the oral brief along the way given to anyone other than the, the two board
[21:32] members Sam helped install to oversee it. And he said like, yeah, yeah, no, I believe it was given to
[21:38] everyone who joined the board after. And that is, we have, you know, a person with direct knowledge
[21:45] of that saying that is simply a lie. Um, and like, that really does appear to be the case,
[21:53] uh, that that is untrue, you know, and, if we want to be generous, perhaps he was misinformed.
[21:58] Um, so there's a lot of these casual assurances. And I, I use that example in part because
[22:04] that's a great example of, uh, dissembling, let's call it, that can have real consequences legally.
[22:13] You know, I don't need to tell you like under Delaware corporate law, if this company IPOs,
[22:19] shareholders could under section 220, like complain about this and demand underlying
[22:24] documentation. There's already board members saying like, well, wait a minute, that briefing
[22:29] should have happened. So these things that seem to jump out of his mouth all the time,
[22:33] they can have like real market moving effects, real effects for this company and, and bringing it back
[22:38] to the kind of utopian hype language that's resurfaced, I think not coincidentally on the day
[22:45] this piece came out, um, also effects for all of us, um, because the dangers are so acute, you know,
[22:54] with respect to the way it's being deployed in weaponry, with respect to the way it's being used
[22:58] to identify, uh, chemical warfare agents, um, the disinformation potential, you know, uh,
[23:05] because of the way in which the utopian hype, um, does seem to be prompting a lot of credible
[23:11] economists to say this has all the signs of a bubble. Even Sam Altman has said, you know,
[23:17] someone's going to lose a lot of money here. That could really like crater a lot of American
[23:22] and global economic growth if there's like a true puncturing of a bubble involving all of these
[23:27] companies doing deals with each other, going all in on AI while borrowing so heavily.
[23:32] So what Sam Altman says matters. And I think the preponderance of people around him, you know,
[23:39] you mentioned, we talked to more than a hundred. It was actually well over a hundred. We had a
[23:42] conversation at the finish line where it's like, would it be too petty to say it's like this much
[23:46] higher number. And we were like, yeah, let's downplay. Um, we'll play it cool. But it was, it was so,
[23:50] so many people and such a significant majority of them saying this is a concern. And I think that's
[23:57] all why let me ask you about that, that number. And as you mentioned, people got more and more
[24:01] open with the concerns as time went on, it feels like the pressure around the bubble, the race to
[24:08] win, to pay off all this investment, to emerge as the winner to IPO. That has changed a lot of
[24:15] attitudes. It certainly created more pressure on Sam and OpenAI. We published a story this week,
[24:18] just about the vibes of OpenAI. Your story is part of it, but massive staffing changes in the executive
[24:23] ranks at OpenAI. People are coming and going. The researchers are all headed away, largely to
[24:27] Anthropic, which I think is really interesting. You can just see this company is feeling the pressure
[24:32] and it's, it is responding to that pressure in some way. But then I think back to Sam getting fired
[24:38] and very memorable. This is just memorable for me. It's memorable for no one else. But I took a
[24:43] source call at the Bronx Zoo at 7 PM on a Friday. And it was someone saying, they're going to try to
[24:48] get Sam back. And then we spent the weekend chasing that story down. And I was just like, I'm at the
[24:52] zoo. Like, what do you want me to do here? And the answer was stay on the phone. Well, my daughter
[24:56] was like, get off the phone. And like, that's what I did. It was ride or die to get Sam back.
[25:00] That company was like, no, we're not letting the board fire Sam Altman. The investors,
[25:05] they're quoted in your piece. We went to war, I think, is the Thrive Capital position
[25:09] to get Sam back. Microsoft went to war to get Sam back. Now it's later, and everyone's like, we're
[25:16] very, we're going to IPO. We're going to, we got to the finish line. We got our guy back and he's going
[25:19] to get us to the finish line. We're concerned he's a liar. Why was it war to get him back then?
[25:24] Because it doesn't seem like anything has actually changed, right? You talk about the,
[25:28] the memos that Ilya Sutskever and Dario Amodei kept while they were contemporaries of Sam Altman.
[25:34] Ilya's number one concern was Sam is a liar. None of that has changed. So why was it war to bring him
[25:39] back then? And now that we're at the finish line, it, it seems like all the concerns are out in the open.
[25:44] Well, first of all, sorry to your daughter and my partner and all the other people around
[25:49] journalists. It was quite a weekend for everyone.
[25:51] Um, yeah, no, it does take over one's life. Uh, and this story definitely has mine, uh, over the
[25:58] last period of time. It actually relates to this theme of, of journalism and access to information.
[26:05] I think, um, you know, the, the investors that went to war for Sam and all played roles in making
[26:13] sure he came back and the board that had been specifically designed to protect a nonprofit's
[26:20] mission to put safety over growth and to fire an executive. They couldn't be trusted with that.
[26:28] Them going away that, that was all because yes, the market incentives were there, right? You know,
[26:34] Sam was able to convince people, well, the company is just going to fall apart. Um, but the reason he
[26:38] had support was lack of information. Um, those investors in many cases now say, you know, I look
[26:49] back and, and, uh, I, I think I should have had more concerns if I had known fully what the claims
[26:55] were and what the concerns were. Not all of them. Opinions vary. And we quote a range of opinions.
[27:00] Um, but there are significant ones who were acting on very partial info.
[27:07] The board that fired Sam was in the words of one person who used to be on the board, you know,
[27:14] very JV. Um, and they fumbled the ball hard and, you know, we document the underlying, uh, complaints
[27:24] and people can decide for themselves whether it accumulates into the kind of urgent concern they
[27:28] felt it was. But, uh, that argument and that information was not being presented. They received what
[27:37] some of them now acknowledge as bad legal advice on how to describe it. You'll remember the quote, probably
[27:44] a lot of your listeners and viewers will remember the quote, you know, a lack of candor was what it
[27:49] was reduced to. And then they like, essentially wouldn't take calls. So you would not take calls.
[27:55] I'm sure you tried. Everyone I know tried. And it got to the
[28:00] point where, you know, as a journalist, you're not supposed to give your sources advice. But I was like,
[28:05] you know, this will go away if you don't start explaining yourself.
[28:09] And that's what happened. And, and so you had, you know, forget journalists, like, Satya Nadella
[28:15] saying, what the hell happened? I can't get anyone to explain it to me. And that's the company's
[28:20] major financial backer. Um, and then you have like Satya calling Reid Hoffman and Reid calling
[28:27] around and saying, I don't know what the fuck happened. Um, and they're like, understandably in
[28:32] that void of information, looking for, you know, the, the traditional non AI indicators that would
[28:38] justify such an urgent, sudden firing, like, okay, it was, was it sex crimes? Was it embezzlement?
[28:44] And the entire subtle, but I think meaningful argument that this tech is different and that
[28:50] this kind of a steady accumulation of smaller betrayals actually could have meaningful stakes,
[28:58] both for this business and maybe for the world, that was really largely lost. And so, you know,
[29:05] capitalist incentives won out, but also the people who made it win out were not always operating
[29:13] with complete information. I want to just ask about the, the, what everyone thought it was
[29:17] aspect for one moment. Cause I certainly saw the news and I said, oh, something bad must have happened.
[29:21] You've done a lot of Me Too reporting, famously, you broke the Harvey Weinstein story. Uh, you spent a lot
[29:26] of time reporting on these claims that I think you decided were ultimately unfounded that, uh, Altman
[29:33] had sexually assaulted minors or hired sex workers, or even murdered an OpenAI whistleblower. Did you,
[29:39] I mean, you are the person who can report this stuff the most rigorously. Did you, did you decide that
[29:44] came to nothing? Well, look, I'm not in the business of saying something has come to nothing. What I can say
[29:51] is I spent months looking at these claims and did not find corroboration for them. And it was striking to me
[30:00] that, you know, these guys, these companies who have so much power over our futures truly are spending a
[30:09] disproportionate amount of their time and resources in a childish mud fight. Um, you know, one executive
[30:16] describes it as Shakespearean. It's like the amount of, you know, private investigator money, the
[30:22] opposition dossiers being compiled, uh, it's relentless. And the unfortunate thing is that
[30:28] the kind of salacious stuff, which gets parroted by Sam's competitors as just assumed fact, right?
[30:37] There's this allegation that he, you know, pursues underage boys, and at many cocktail parties in
[30:44] Silicon Valley, you hear this. And like on the conference circuit, I've heard it repeated by like
[30:50] credible, prominent executives. Like everybody knows this is a fact. You know, I talk about where this
[30:55] comes from, right? The various vectors by which it's transmitted, Elon Musk, uh, and his associates
[31:01] seemingly pushing like really hardcore dossiers, um, that kind of amount to nothing. They're vaporous
[31:07] when you actually start to look at the underlying claims. The sad thing is that it really obscures
[31:14] the more evidence-based critiques here that I think really deserve urgent oversight
[31:20] and consideration. The other theme that really comes through in the story is
[31:24] almost a sense of fear that Sam has so many friends, he's invested in so many companies
[31:30] from his previous role as CEO of Y Combinator, just to his personal investing, some of which is in direct
[31:34] conflict with his role as CEO of OpenAI, that there's silence around him. There's one line that
[31:40] really struck me. You describe Ilya Sutskever's memos and they're just out in Silicon Valley and
[31:44] everyone calls them the Ilya memos, but there's even silence around that, right? They're passed around
[31:48] but they're not discussed. Where do you think that comes from? Is it fear? Is it a desire to get angel
[31:54] investment? Where does that come from? I think it's a lot of cowardice, I'll be honest. You know, having
[31:59] reported on national security stories where the sources are, you know, whistleblowers who stand to lose
[32:04] everything and like face prosecution and they still do the right thing and talk about things to create
[32:12] accountability. I've worked on, you know, the sex crimes related stories that you mentioned where
[32:19] sources are deeply traumatized and fear like a very personal kind of retribution.
[32:26] In many cases around this beat, you're dealing with people with their own profile and power,
[32:32] right? You know, they're like either famous people themselves or they're surrounded by famous people.
[32:38] Um, they have robust business lives. Um, and in my view, it is actually like very low exposure for
[32:47] them to talk about this stuff. And thankfully like the needle is moving as we talked about earlier and
[32:51] people are now talking more. But for such a long time, people really just shut up about it because I
[32:58] think the Silicon Valley culture is just so kind of ruthlessly self-interested and ruthlessly business
[33:06] and growth oriented. So, you know, I think this afflicts even like some of the people who were
[33:12] involved in firing Sam, where you saw in the days after, yes, one factor that led to him coming back and
[33:20] the firing of old board members was that he rallied investors who were confused to his cause.
[33:28] But another is that so many other people around it who had the concerns and voiced them urgently
[33:35] just folded like napkins and changed their tune the moment they saw the wind was blowing the other way
[33:41] and they wanted in on the profit train. It's pretty dark, honestly, from my standpoint as a reporter.
[33:47] Some of those people are Mira Murati, who for, I believe, 20 minutes was the new CEO of OpenAI.
[33:54] She was then replaced. It was a very complicated sort of dynamic and obviously Sam came back.
[33:59] The other person is Ilya Sutskever, who was one of the votes to remove Sam. And then he changed his
[34:04] mind or at least said he changed his mind and then he left to start his own company. Do you know what
[34:07] made him change his mind? Was it just money? Well, to be clear, I'm not singling those two out.
[34:11] There's also, you know, there's other board members who were involved in the firing who also
[34:14] fell very silent after. I think it's like a wider collective problem. These are, in some cases,
[34:24] people who had the moral fiber to sound alarms and take radical action. And that is to be commended.
[34:30] And that's how you assure accountability. And that could have helped a lot of people who are
[34:36] affected by this technology. It could have helped an industry to remain more safety focused,
[34:40] meaningfully. But, you know, dealing with whistleblowers a lot and people who try to
[34:46] prompt that accountability a lot, you see that it also, it takes the fiber of sticking it out
[34:52] and standing by your convictions. And this industry is truly full of people who just do not stand by
[34:58] their convictions. Even though they think they're building digital God that will somehow either
[35:03] eliminate all labor or create more labor or something will happen. Well, that's the thing.
[35:08] So it's the culture of not standing by your convictions and all ethical concerns falling
[35:13] by the wayside the moment there's any heat or anything that could threaten your own standing
[35:16] in the business, um, is, you know, maybe all well and good to some extent for business as usual
[35:22] companies that are making whatever kind of widget. But these are also the same people who are saying
[35:27] this could literally kill us all. And, you know, again, you don't have to go to the Terminator
[35:32] Skynet extreme. Like that is a set of risks that are already materializing. It is real. They are
[35:38] right to warn about that. Um, but you know, you'd have to have someone else armchair psychologize how
[35:46] those two things can live in the same people where they're sounding the urgent warnings.
[35:50] Uh, they may be, like, putting a toe in and trying to do something and then they're just folding and
[35:55] falling silent. And, and that is precisely why you can have these kinds of instances of things
[36:03] being kept out of writing and things being swept under the rug and no one talking about it this
[36:08] openly for years after the fact. The natural responsible party here would not be the CEOs of
[36:14] these companies. It would be governments. In the United States, maybe it's state governments,
[36:17] it's federal government. Certainly these companies all want to be global. There's lots of global
[36:21] implications here. I watched OpenAI and Google and Anthropic all sort of goad the Biden administration
[36:28] into releasing an AI executive order. It was pretty toothless in the end. It just said they had to
[36:35] talk about what their models are capable of and release some safety testing. And then they all kind
[36:39] of backed Trump and Trump came in and wiped all that out and said, we have to be competitive. It's a
[36:43] free for all go for it at the same time that they're all trying to raise money from Middle Eastern
[36:50] countries that have lots of oil monies that want to change their economies. Those are politicians.
[36:55] I feel like politicians should definitely understand someone is talking out of both
[36:59] sides of their mouth and they're not going to be too upset if someone's disappointed in the end.
[37:04] But the politicians are getting taken for a ride too. Why do you think that is?
[37:07] This is really, I think why the piece matters in my view and why it was worth spending all this time
[37:13] and detail on. We are in an environment where the systems that, as you say, should be providing
[37:20] oversight are just hollowed out. And that's a post-Citizens United America where the flow of money
[37:30] is so unfettered. And it's a particular concentration of that problem around AI, right? Where there's these
[37:39] PACs, um, that are like proliferating and, and flooding money into quashing meaningful regulation
[37:46] at both a state and a federal level. You have Greg Brockman, um, you know, Sam's second in command
[37:53] directly contributing in a major way, uh, to a couple of those. And it leads to a situation where
[37:59] there just really is capture of legislators and potential regulators. Uh, and that is a hard spiral to get
[38:08] out of. The sad thing is, I think that there are simple policy moves, some of which are being trialed
[38:16] elsewhere in the world that would help with some of these accountability problems. You know, you could
[38:22] have more mandatory pre-deployment safety testing, which is something that is already happening in
[38:29] Europe. Um, you know, for frontier models, you could have more stringent written public record
[38:34] requirements for the kinds of internal investigations, um, you know, where we saw things being
[38:39] kept out of writing in this case. You could have a more robust set of national security review
[38:44] mechanisms for the kinds of, like, Middle Eastern infrastructure ambitions that Sam Altman was
[38:50] pushing. Um, and as you say, where he was kind of doing this bait and switch with the Biden
[38:56] administration saying, regulate us, regulate us. Um, and like helping them craft an executive order.
[39:02] And then the moment Trump gets in, like truly in the very first day is just going, no holds barred,
[39:08] let's accelerate and let's build a massive, you know, uh, data center campus in Abu Dhabi.
[39:13] You could have, this is a really simple one, like whistleblower protections. There, there is no federal
[39:20] statute protecting AI company employees who disclose these kinds of safety concerns that are being aired in
[39:26] this piece. You know, we have cases where, like, Jan Leike, who was a senior safety guy at the time
[39:32] leading super alignment at the company writes to the board, essentially whistleblower material saying
[39:37] the company is going off the rails on its safety mission. Um, like those are the kinds of people
[39:44] who should actually have an oversight body they can go to, and they should have explicit statutory
[39:51] protections of the kinds we see in other sectors, right? Like this is simple to replicate a
[39:56] kind of a Sarbanes-Oxley style, uh, regime. I think that despite how, uh, acute the problem is
[40:05] of Silicon Valley assuming control of all of the levers of power, um, and despite how hollowed out
[40:12] some of these institutions that might provide oversight and guardrails are, I still do believe
[40:19] in the basic math of democracy and of self-interested politicians. And there is more and more polling
[40:26] data emerging that a majority of Americans think that the concerns or questions or risks of AI currently
[40:33] outweigh the benefits. And so I think the flood of money into AI, it's within all of our, I'm sorry,
[40:42] into politics from AI. It's within all of our power to make that a source of a question mark
[40:50] with respect to politicians. You know, when Americans go to vote, they should be scrutinizing
[40:57] whether the people they vote for, uh, especially if they are, you know, uncritical and anti-regulation,
[41:05] given all these concerns, um, you know, are bankrolled by big tech special interests.
[41:11] So I think like if people can read pieces like this and listen to podcasts like this and care enough
[41:17] to think critically about their decisions as voters, uh, there is a real opportunity
[41:24] to generate a constituency in Washington of representatives who do keep an eye and do force
[41:31] oversight. That might be one of the most optimistic things I've ever heard anyone say about the current
[41:35] AI industry. And I appreciate it. I might, I'm obsessed with the polling that you're talking about.
[41:40] There's a lot of it now. It's all pretty consistent. Uh, and it kind of looks like the more young
[41:45] people in particular are exposed to AI, the more distrustful and angry they are about it. That's,
[41:51] that's kind of the valence of all the polling. And I look at that and I think, well, yeah,
[41:54] smart politicians would just run against that. They would just say, we're going to hold big tech
[41:57] accountable. And then I think about the past 20 years of politicians saying they're going to hold
[42:02] big tech accountable. And I'm struggling to find even one moment of big tech holding,
[42:06] being held accountable. And the only thing that makes me think this might be different
[42:10] is, well, you actually have to build the data centers and you can vote against that and you can
[42:16] petition against that and you can protest against that. You know, I think there's a politician who
[42:20] just had their house shot at, cause they voted for a data center. The tension is reaching, I would call
[42:25] it a fever pitch. You've described the insularity of Silicon Valley, right? This is a closed ecosystem.
[42:31] It feels like they think they can run the world, right? They're, they're putting a ton of money into
[42:35] politics and they are running up against the reality of people don't love the products,
[42:40] which doesn't give them a lot of cover, right? The more they use the products, the more upset they
[42:45] are. And the politicians are beginning to see there are real consequences to supporting the tech
[42:50] industry over the people they represent. You talk to so many people, do you think it is possible for
[42:55] the tech industry to learn the lesson that is right in front of them?
[42:58] You know, you say it feels like they think they can run the world without accountability. I don't
[43:04] even think that needs the feels like qualifier. I mean, you look at like the language Peter Thiel
[43:09] is using, it's explicit, right? And of course that's an extreme example. And, uh, you know,
[43:16] Sam Altman, though he is like close with and informed by Thiel's ideology to some extent,
[43:22] uh, is a very different kind of person who might sound, you know, different and more measured
[43:26] up to a point. But I do think like the wider ideology that you get from Thiel, which is
[43:32] basically like, we're done with democracy. We don't need it anymore. We have so much, uh, that we
[43:37] kind of just want to like build our own little bunkers. Uh, we're not dealing with the Carnegies
[43:42] anymore or the Rockefellers anymore where they're bad guys, but they're, you know, they feel they
[43:47] need to participate in a social contract and build things for people. There's a real nihilism
[43:52] that set in. And I do think it's just been like a mutually reinforcing spiral in recent American
[43:59] history of, uh, moguls and private companies acquiring super governmental power, uh, while,
[44:08] uh, democratic institutions that might hold them accountable are hollowed out.
[44:12] And I do not feel optimistic about the idea that those guys might just wake up one day and think like,
[44:19] huh, actually maybe we do need to participate in society and help build things for people.
[44:25] I mean, you look at like the little microcosmic example of the, the giving pledge where, you know,
[44:30] there was a moment where it was seemly to be charitable. And that moment is now like passed
[44:37] and even kind of ridiculed. That is a problem. The broader problem of lack of accountability,
[44:43] um, that I think can only be solved extrinsically. That has to be voters mobilizing, um, and like
[44:55] resurrecting the power of government, uh, oversight. Um, and you're exactly right to say that the main,
[45:04] uh, vector through which people could maybe achieve that is like, it's local. It's to do with where
[45:10] infrastructure is being built, you know, and you mentioned some of the like white hot tension
[45:14] around this that's leading to violence and threats. And obviously like nobody should be
[45:17] violent or threatening, but I, and I'm also not here to make specific, um, policy recommendations
[45:24] other than to just present, like, these are some of the policy steps that seem basic and are working
[45:28] elsewhere in the world. Right. Or that have worked in other sectors. Um, I'm not here to say which of
[45:33] those should be executed and how. I do think something needs to happen and it needs to be external,
[45:39] not just trusting these companies. Cause right now we have a situation where the companies that are
[45:45] developing the tech and are equipped best to understand the risks and in fact are the ones
[45:51] warning us of the risks are also the ones with nothing but incentive to go fast and ignore those
[45:59] risks. And you just don't have anything to counterbalance that. So whatever form reforms might
[46:05] take in terms of specifics, something has to run up against that. And I, and I do still return to that
[46:13] optimism that the people still matter. Generally, I buy the argument. Let me just make the one tiny counter
[46:20] argument that I think I can articulate. The other thing that could happen outside of the ballot box
[46:26] is that the bubble pops, right? That not all these companies get to the finish line, that there isn't
[46:32] product market fit for consumer AI applications. And again, I don't, I don't quite see it yet,
[46:37] but I'm a consumer tech reviewer and maybe I just have higher standards than everybody else.
[46:40] There is product market fit in the business world, right? Having a bunch of AI agents write a bunch
[46:46] of software seems to be a real market for these tools. You can read the arguments from these companies
[46:51] saying we've solved coding and that means we can solve anything. If we can make software, we can solve
[46:55] any problems. I think there are real limits to the things software can do there. That's great in the
[47:01] business world. Software can't solve every problem in reality, but they got to get there. They got to
[47:05] finish the job and maybe not everybody makes it to the finish line. And there is a crash and this
[47:09] bubble pops and maybe OpenAI or Anthropic or xAI, one of these companies, like, fails and all this
[47:16] investment goes away. OpenAI is right on the cusp of an IPO. There's a lot of doubts about Sam as a
[47:20] leader. Do you think they're going to make it to the finish line? I'm not going to prognosticate,
[47:25] but I think you raise an important point, which is market incentives do matter internally to Silicon
[47:33] Valley and the precarity of the current maybe potentially allegedly bubble dynamics do stand to
[47:44] interrupt the, you know, again, potentially, according to critics, race to the bottom on safety. I would also
[47:51] add to that if you look at historical precedents where there's a similarly seemingly impenetrable
[47:56] set of market incentives, um, and potentially deleterious effects for the public, there's impact
[48:04] litigation. And you see that as an area of concern lately. Um, like Sam Altman is out there this week,
[48:11] uh, endorsing legislation that would shield AI companies from some of the types of liability that
[48:19] OpenAI has been exposed to, right? In wrongful death suits, for instance. Um, of course, there's
[48:24] a desire to have that shield from liability. I think that the courts can still be a meaningful mechanism.
[48:32] Um, and it'll be really interesting to see how these suits shape up. Um, you know, you already saw,
[48:39] for instance, the class action suit actually of which I and many, many other authors I know are members
[48:46] against Anthropic, um, for their use of, of books, um, that were under copyright. If there are, you know,
[48:54] smart legal minds and plaintiffs who care, we just have seen historically in cases from big tobacco to
[49:02] big energy, um, that you can also get some guardrails and some incentives to slow down or be careful or
[49:11] protect people that way. It does feel like the entire cost structure of the industry hangs on
[49:19] a very, very charitable interpretation of fair use. Doesn't come up enough, right? The cost
[49:25] structure of these companies could, could spiral out of control if they have to pay you and everyone
[49:29] else whose work they've taken. But it's inconvenient to think about, so we, we just don't
[49:35] think about it. Right next to that is all these products are now running at a loss. Like currently today,
[49:39] they're all running at a loss. They're burning more money than they can make. At some point,
[49:43] they have to flip the switch. Sam is a businessman, right? As you've mentioned several times, he's not
[49:47] a technologist, he's a business person. Do you think he's ready to flip the switch and say,
[49:50] we're going to make a dollar? Cause that, when I ask, do you think OpenAI is going to make it? It's,
[49:54] they got to make a dollar. And so far, Sam has made all of his dollars by asking other people for
[49:59] their money instead of having his companies make money. Well, that's a big lingering question,
[50:04] you know, for Silicon Valley, for investors, for the public. Uh, you see some statements and moves out of
[50:11] OpenAI that seem to evince a, uh, a kind of panic about that. Um, you know, shutting down
[50:17] Sora, shutting down some ancillary projects, trying to zero in on the core product. Um, but then on the
[50:24] other hand, you still see at the same time, tons of mission creep. Um, right. You know, even like the,
[50:30] as a small example, it's obviously not core to their business, but like the TBPN acquisition,
[50:35] by the way, right. As we were reaching the finish line and fact checking, um, the, the company facing
[50:41] this kind of journalistic scrutiny acquires a platform where they can, you know, have more
[50:46] direct control over the conversation. Um, I, I think that there's a lot of investors that are
[50:53] concerned based on the conversations I've had that this problem of promising all things to all people
[50:59] also extends to this lack of focus in the core business model. Um, and I mean, you're closer to
[51:09] the kind of prognosticating and watching the market than I am probably, uh, I'll leave you and I'll
[51:14] leave the listeners to be the judge of whether they think open AI can flip the switch. Well, I asked
[51:20] the question, cause you've got a quote in the piece from a senior Microsoft executive. And it is that Sam's
[51:25] legacy might end up more similar to Bernie Madoff or Sam Bankman-Fried than Steve Jobs. That is quite
[51:31] a comparison. What'd you make of that comparison? I think that's a paraphrase. The Steve Jobs thing
[51:35] isn't part of that quote, but yes, there is this extraordinary comparison to, I think it's actually,
[51:40] there's an interesting sort of sobriety to it. Cause it's phrased as like, I think there's a small,
[51:44] but real chance that he winds up being a, uh, you know, an SBF- or a Madoff-level scammer,
[51:51] meaning to my mind, not that Sam is being accused of those specific types of fraud or crimes,
[52:00] but that the degree of dissembling and deception from Sam may have a chance of ultimately being
[52:09] remembered, uh, at that scale. Um, yeah, I think what's most striking about that quote, honestly,
[52:16] is that, you know, you call around at Microsoft and you don't get a, like, that's crazy, we've
[52:22] never heard that. You get a lot of, like, yep, a lot of people here think that, which is remarkable.
[52:27] And, and I think it does go to these nuts and bolts business questions that there are investors
[52:34] who say, like one told me, for instance, look, in light of the way in which this trait has persisted
[52:41] in the years after the firing. Um, this was also like, I thought an interesting sober thought,
[52:46] uh, that it's not necessarily that Sam should be at the absolute bottom of the list, like
[52:53] the lowest of the low in terms of the people that absolutely must not build this technology.
[52:58] Um, for what it's worth, there's several people who say Elon Musk is that person.
[53:01] Yeah. Um, but that this trait puts him maybe at the bottom of the list of people that should build
[53:09] AGI, um, you know, and beneath several other leading figures, um, in this field. So I thought
[53:15] that was an interesting appraisal, you know, and, and that's the kind of thinking I think that you get
[53:19] from the, like the real pragmatists who maybe aren't buying into the safety concerns as much. They're
[53:24] just growth oriented and they think that OpenAI now has a problem with Sam Altman.
[53:29] The, uh, the Microsoft piece of it is really interesting. That company thought they were on
[53:34] top of the world that they had made this investment and they were going to leapfrog everyone, especially
[53:38] and most importantly, Google, back into consumers' good graces. And the level to which they feel
[53:44] burned by this adventure, and this is a very soberly run company, I don't think can be overstated.
[53:50] You mentioned the characters and the personality traits. I want to end here with a question from our
[53:54] listeners. I said on our other show, The Vergecast, that I was going to be talking to you.
[53:58] And I said, if you have questions for Ronan about this story, let me know. So we have one here that
[54:01] I think ties in neatly with what you're describing. I'm just going to read it to you.
[54:04] How do the justifications for bad behavior, cutthroat actions of Altman and other AI leaders
[54:10] differ from the justifications Ronan has heard from other high profile leaders in politics and media?
[54:15] Don't they all justify their actions by saying, this is how the world gets changed.
[54:19] If I don't do this, someone else will.
[54:20] Yeah, there's a lot of that going around. I would say what is distinctive to AI is that the
[54:28] existential stakes being so uniquely high means both the statements of risk are extreme, right?
[54:37] You have Sam Altman saying this could be lights out for all of us. And also the kind of, you know,
[54:43] critics might say mania that the questioner is referring to is extreme. The thing that, um,
[54:51] Sam accused Elon of on the record, you know, that, that, uh, maybe he wants to save humanity,
[54:56] but only if it's him, the kind of ego component of wanting to win, which is a framing Sam uses
[55:03] all the time, um, that this is one for the history books. This could change everything.
[55:07] And therefore even above and beyond the, you've got to break a few eggs mindset of most Silicon
[55:14] Valley enterprises, there is in the minds of some figures leading AI, I think a complete
[55:20] rationalization for any and all fallout. Um, and you know, forget breaking eggs. I, I think a lot
[55:27] of the underlying safety researchers would say like potentially risking breaking the country,
[55:33] breaking the world, breaking, you know, millions of people whose jobs and safety hang in the balance.
[55:41] That's, what's unique about it. And that's where, you know, I, I close reflecting on this body of
[55:47] reporting, really believing this is about more than Sam Altman. This is about an industry that is
[55:54] unconstrained and a spiraling problem of America being unable to constrain it.
[56:00] Well, we, we had some optimism in there, but I think that's a good place to leave it.
[56:04] There's a lot coming up. And on a downbeat.
[56:05] Of course. Uh, that's every great story, really. The Musk-Altman trial is upcoming.
[56:11] I think we're going to learn a lot more here. I suspect I will want to talk to you again. Ronan
[56:15] Farrow, thank you so much for being on Decoder. Thank you.
[56:17] I'd like to thank Ronan Farrow for taking the time to join Decoder and thank you for listening. I hope
[56:24] you enjoyed it. If you'd like to let us know what you thought about this episode or really anything
[56:26] else at all, drop us a line. You can email us at decoder@theverge.com. We really do read all the emails.
[56:31] Or you can hit me up directly on Threads and Bluesky. If you like Decoder, please share it with your friends and subscribe
[56:35] wherever you get your podcasts. Decoder is a production of The Verge and part of the Vox Media Podcast Network.
[56:39] We'll see you next time.