About this transcript: This is a full AI-generated transcript of The Rise and Reckoning of AI from American Museum of Natural History, published March 26, 2026. The transcript contains 16,228 words with timestamps and was generated using Whisper AI.
"Good evening. Welcome back. This is our 25th year of the Isaac Asimov Memorial Panel Debate. It was established in memory of Isaac Asimov, the noted science fiction and nonfiction author. Anyone of the era who had any interest in science at all was influenced by the writings of this man. I would..."
[0:00] Good evening. Welcome back. This is our 25th year of the Isaac Asimov Memorial Panel Debate.
[0:11] It was established in memory of Isaac Asimov, the noted science fiction and nonfiction author.
[0:21] Anyone of the era who had any interest in science at all was influenced by the writings of this man.
[0:29] I would later learn, even after I took office here, that he lived down the street a few blocks
[0:37] and was here often using our research libraries to feed the contents of his books.
[0:45] And so we have this kind of genetic link to the man, and this Memorial Panel Debate is in his honor.
[0:54] And I want to publicly thank the interest and support of his family,
[1:00] there are relatives of Janet Asimov, his widow, there in the audience,
[1:07] as well as Isaac Asimov's daughter, Robyn Asimov. Robyn, good to see you again.
[1:12] Yes. Sorry, how rude of me, I didn't introduce myself. I'm Neil deGrasse Tyson.
[1:23] I am the Frederick P. Rose Director of the Hayden Planetarium, and this is our flagship event of the year.
[1:33] Thank you for attending. Yes.
[1:35] The topic tonight is AI. Oh, yeah. Oh, yeah.
[1:47] And do we have a panel for you?
[1:51] Let me, oh, I just, as a point of disclosure, just, I am in the class action suit against Anthropic for pirating books.
[2:09] There's like nine of my books that were pirated.
[2:14] They were pirated and used to train their AI models.
[2:18] So I'm just letting you know that in advance. That's just what's going on.
[2:24] So let's get this party started.
[2:28] My first panelist this evening is a professor of technology and government at Harvard University.
[2:39] Please welcome LaTanya Sweeney. LaTanya, come on out.
[2:43] LaTanya, your last name is Sweeney. That sounds very Irish.
[2:57] Yes, can't you tell?
[2:59] Yeah, it's Irish?
[3:00] Yes, of course.
[3:01] Yeah, so happy St. Patrick's Day.
[3:02] Happy St. Patrick's Day, yes.
[3:03] And happy St. Patrick's Day.
[3:09] Next, we've got a professor of computer and information science at the University of Pennsylvania.
[3:17] Please help me welcome Chris Callison-Burch. Chris, come on out.
[3:21] I've got an associate professor, associate professor of statistics at Columbia.
[3:33] She specializes in the foundations of machine learning.
[3:38] Help me welcome Cindy Rush, Columbia University.
[3:42] So Cindy, you were not held up by TSA today because you just came down from 103rd Street and Broadway.
[3:55] Okay, thank you for taking the trip here.
[3:59] Next, I have the president of the Machine Intelligence Research Institute in Berkeley, California, Nate Soares.
[4:07] Nate, come on out.
[4:09] Last on this panel, normally we have five on our panel.
[4:21] This year we have six, just given the breadth.
[4:24] Neil.
[4:25] Wait, wait.
[4:26] You forgot?
[4:27] Neil, you forgot one.
[4:28] Yeah.
[4:29] There's somebody missing.
[4:30] You missed one.
[4:31] Hang on.
[4:32] There it is.
[4:33] Thank you.
[4:37] Do over.
[4:40] We have, believe it or not, such a title exists.
[4:43] We have a distinguished professor of AI at the University of Southern California.
[4:48] Join me in welcoming Kate Crawford.
[4:50] Kate, come on out.
[4:52] Professor of AI, that already exists.
[5:03] That's kind of spooky, actually.
[5:05] And last, we have the former CEO of Google, Eric Schmidt.
[5:15] Come on out, Eric.
[5:18] Hello.
[5:19] Eric, I'm going to start with you.
[5:33] You, we're blaming you for everything.
[5:37] I'm used to it.
[5:39] You're used to it.
[5:40] Okay.
[5:41] I'm going to start with you.
[5:42] You're going to start with me.
[5:43] Okay.
[5:44] You've been at Google from very early days when most of us remembered it simply as a
[5:49] search engine, and the reach and grip has expanded.
[5:54] Some impressive things occurred at the time.
[5:57] Did you foresee AI as a major part of what would be shaping the world back when you began?
[6:02] Well, we knew AI would matter.
[6:04] I don't think it was until 2011 when we started working on essentially supervised fine-tuning
[6:11] that it really became clear.
[6:12] And then the Transformer paper in 2017, plus the AlphaGo win, showed the power.
[6:18] Now, if I remember correctly, Google bought an AI company to assist this.
[6:23] Isn't that correct?
[6:24] You didn't, that wasn't homegrown, was it?
[6:26] That's right.
[6:27] We actually bought a whole bunch of these little startups at the time, back when the
[6:31] valuations were reasonable.
[6:33] Oh, you didn't pay a trillion dollars for the company?
[6:36] Yeah.
[6:37] And the DeepMind acquisition was probably the most important of all.
[6:41] That's the British company.
[6:42] The British company.
[6:43] Yes.
[6:44] And that's the one that now is driving, at Google anyway, I don't work there anymore,
[6:48] all of the sort of core AI work.
[6:52] Nate, you co-authored a book titled If Anyone Builds It, Everyone Dies.
[7:03] I hope it's wrong.
[7:04] You hope it's wrong.
[7:05] I hope you have a little more than hope.
[7:08] What is the it?
[7:09] The it here is machine super intelligence.
[7:12] So, this is AI that is smarter than the smartest humans at every mental task, better than the
[7:18] best humans at every mental task.
[7:20] That sort of AI doesn't exist yet, but it is what these companies say they're racing
[7:25] towards.
[7:26] They're saying, we want to try and make AI smarter than Einstein, such that you can run
[7:30] a million of them in a data center much faster than a human.
[7:33] That's what they're headed towards.
[7:35] A lot of people think that the big danger with AI is what if we hand them guns?
[7:41] What if we do what?
[7:43] What if we hand them weapons?
[7:44] Hand them weapons, yeah.
[7:45] And there are issues there that people need to talk about.
[7:49] But also, humanity is not a dangerous species because somebody else gave us guns.
[7:55] Humanity is the sort of species where if you dump 10,000 humans naked in the savanna, they
[8:02] bootstrap their way to nuclear weapons with their bare hands.
[8:05] That is the ability that is extremely dangerous to automate, and that's what these companies
[8:11] are rushing towards.
[8:12] Thanks for that.
[8:13] Thanks for that happy thought, okay?
[8:16] I'm going to cool it down a little bit.
[8:19] Yeah, Cindy, you were early in an attempt to understand machine language.
[8:24] You're a statistician.
[8:26] And everything I know about the large language models, they're creating information statistically
[8:33] based on the information that's sitting out there waiting to be mined.
[8:38] So, would you agree that superintelligence...
[8:42] That superintelligence is a real thing, and that it's accessible, and that even if we
[8:47] achieve it, it's going to kill us all?
[8:52] I mean, anything seems possible, right?
[8:59] We're learning from data.
[9:01] There's tons of data out there, and the progress is moving steadily forward.
[9:07] Forward.
[9:08] Forward.
[9:09] This is forward.
[9:10] Yeah.
[9:11] Forward, okay?
[9:12] I don't know if it'll kill us.
[9:13] Yeah.
[9:14] We might kill ourselves.
[9:15] But you're deep into sort of the programming of how all of this works.
[9:16] So, dare I say, you're in the neurology of it.
[9:17] Do you see superintelligence as neurologically realistic, computationally realistic?
[9:18] I mean, okay.
[9:19] At the end of the day, all of this is just a mathematical equation, right?
[9:20] We're at the end of our species.
[9:21] Yes, okay.
[9:22] When you ask...
[9:47] So, you ask ChatGPT a question, right?
[9:49] There's a mathematical equation that maps from your question to the output, right?
[9:57] And, yeah, a little bit I'm a little bit cool on it in the sense that at the end...
[10:05] It's still math at the bottom.
[10:07] So it's hard for me to kind of see this becoming something so dangerous and bad.
[10:16] At least in the short term, because there are a lot of steps.
[10:20] So what you're saying is the math in the end might save us, because at its base level it's just
[10:25] simply making statistical decisions, in the same way that at the base level of our
[10:29] mind they're just neurosynaptic firings? Yeah. You know what I've always said
[10:35] is, if you want to stay safe, get a statistician, you know, beside you, right?
[10:39] Statisticians keep us safe. Okay, well, keep that in mind. What? You don't keep us safe.
[10:45] You'll let us know exactly our likelihood of dying. That's not the same
[10:49] thing as keeping us safe, I don't think. Where am I? Chris. Chris. Hey, dude. Hey.
[10:58] What? You've been tracking AI for a while. You teach a class, one of the more
[11:04] popular classes at the University of Pennsylvania. How many people are in your
[11:07] AI class? Last semester I had 650 students in my course. 600 students! Are
[11:12] you grading all their papers, or are you getting AI to do that?
[11:16] I'm not sure I'm allowed to say. Okay, all right, just checking. So, you've
[11:23] been at it for a while. Is there something that particularly impresses you
[11:26] with AI that you did not foresee back when you began this? Absolutely. So, I mean,
[11:31] I think the ChatGPT moment three and a half years ago was a real breakthrough. I
[11:36] had been working with language models for nearly two decades. And remind us,
[11:41] ChatGPT is the product of OpenAI, correct? Correct. And OpenAI, does that belong to somebody else,
[11:46] or is that freestanding? That's its own entity. Its own entity, okay. But since
[11:51] then we've seen so many other breakthroughs. We've had things like the
[11:54] ability to interpret images and discuss them. We've had the ability to launch
[11:59] agents that can conduct research. And we're now to the stage where systems
[12:05] like Claude Code from Anthropic can genuinely be competent research
[12:10] assistants. And I think, you know, to strike an optimistic tone, I think
[12:14] there's a potential for real breakthroughs in science and technology,
[12:16] science and medicine and productivity. Okay, so that's the silver lining of our
[12:23] extinction. Yeah, assuming that we're all still here. Okay, just trying to put
[12:28] that in context. I'm sorry to be so... I should be more neutral, shouldn't I? I'm
[12:33] just the host here. So... why do I keep losing your... Kate! So good to see you, Neil. How you doing, Kate? You wrote a book,
[12:47] The Atlas of AI, back in 2021. That feels like a thousand years ago, right? Is it
[12:54] still relevant? I mean, the weird thing is, Atlas of AI was the project that changed
[13:00] me. I'd been an AI researcher for almost 20 years, and then I did something that
[13:05] I'd never done before, which is that I got out from behind my desk in a lab and
[13:09] I started going to the places where AI was being made. And I mean that in the
[13:14] fullest sense, like not just the labs where we're training algorithms, but I
[13:17] started going to the lithium mines where the minerals are being extracted, going
[13:22] to the data centers where you can see the amount of energy and water that's
[13:25] being used, talking to the crowd workers who are labeling the data, or they're
[13:29] testing the outputs of systems. So the book really, for me, opened my eyes and
[13:34] made me realize that AI is an enormous material infrastructure. Right now it is
[13:39] the biggest infrastructure we have ever built as a species. As of 2026, all of the
[13:45] big tech companies collectively are spending
[13:47] 700 billion on AI infrastructure. So just to compare that to, say, the Manhattan
[13:54] Project, which in today's dollars is like 36 billion, you are looking at something
[13:59] that's 20 Manhattan Projects every year. Okay, so what's not
[14:06] obvious to us when we just type into ChatGPT or to Claude is the back-end social
[14:12] cost of that. Is that a fair way to say that? Exactly right. It's not just the social cost,
[14:17] it's very real. I mean, a lot of people in this room are already asking questions
[14:21] about what's happening, you know, to their kids in school when they're using AI,
[14:25] what's happening to their workplaces when people start getting laid off. But it's
[14:29] also having this enormous environmental impact as well. So it's social, it's
[14:33] political, and it's environmental. LaTanya? Yes. Now, you pioneered a field called public
[14:47] interest technology. So the fact that that may have been necessary at all implies that technology was
[14:55] this thing going along in society, doing its own whatever, its goals set by itself, yet society
[15:03] might not be aligned with its goals, or vice versa. And does that matter? Can it be its own
[15:11] thing? Well, let me just put it this way. My students call the class the save-the-world class. Save-the-
[15:17] world class, meaning that they have... You teach up at the Kennedy School? I teach at Harvard, at the
[15:23] Kennedy School and in the college. Okay.
[15:26] Um, and so the idea of public interest technology is actually trying to make a real vision come
[15:33] true, that is, how do we as a society enjoy the benefits of these new technologies but without
[15:38] the harms? But that presumes that there's always harm. Is there? Surely there's technology that has
[15:43] no harm at all. Okay, let me just say, let me just say that those technologies are hard
[15:53] to come by. I mean, we're in the middle of what many people call the third Industrial
[15:57] Revolution. Sure. The second bringing us things like tissues and cars and electricity, the things
[16:02] that we take for granted. Tissues? Yeah, Kleenex. Wow, I never put that high up on my checklist.
[16:09] Next time I reach for one, I'll think differently about it. But go on. And now the third Industrial
[16:15] Revolution, which many people would say has a start date of the 1950s with the semiconductor,
[16:19] uh, has brought us just tremendous technologies, one after another, sort of revolutions within a revolution.
[16:27] And what makes a big difference between cars and electricity and the technologies today
[16:33] is the opportunity for society to intervene. It took decades for those things to be resolved with
[16:40] society. We don't have decades. The technology is moving as a function of months. Policy moves as a
[16:47] function of years. That temporal mismatch requires us to think differently if we want technology to
[16:53] serve the public good. Yeah. Eric, you're,
[16:57] uh, you know, when I do my research on the universe, at the end of the day the patient doesn't die,
[17:04] because there is no patient. It's just, I'm exploring a moving frontier. As head of, at
[17:11] the time, is it still the highest-valued company in the world, among the top three or four?
[17:18] Is there a goal to have the technology serve society, or is it just where you can sell it and make
[17:27] a buck? No, we're all just capitalists trying to screw everybody. Oh, okay, well, there it is. By the
[17:32] way, you know, that was a joke. That was a joke. You can't make those jokes, because
[17:36] people will believe you. Because I actually just don't agree with this narrative at all. Okay,
[17:41] I think that's why I came back to you. You're right here. Thank you. Um, the companies that are doing
[17:48] this work are well aware of the dangers, and I know this because I work with them on this,
[17:54] and we spend an awful lot of time talking about them, and we can talk about some of them.
[17:57] But that is not to take away from the enormous benefit of these technologies. These are major
[18:02] discoveries in, sort of, humankind. And while we can worry about what will happen if artificial
[18:07] superintelligence occurs, and I've written papers on it as you have, the fact of the matter is that
[18:12] the people who are building these systems have large groups of ethics and measurement and controls
[18:18] and so forth. In the end, the way AI will be controlled is there will be AI that
[18:24] controls the systems, and so forth and so forth,
[18:24] and that's the best way to understand the outcome. Whoa. Okay, all right. So, but, uh,
[18:40] AI controlling AI, what could go wrong? What could go wrong? Yeah, you know, um, when people ask what
[18:52] could go wrong, I think there's a lot of questions in there. Uh, some people are asking, you know,
[18:59] how would the AI wind up with goals that we didn't intend. Uh, people are asking, you know, how could
[19:04] How could the AI do damage when it's trapped in the internet?
[19:08] Sometimes people are asking, how could the AI actually kill us?
[19:10] Paint me a story where it goes wrong.
[19:12] There's a lot of different questions in here.
[19:14] I'll try and run through some stuff real quick, but I also have a book if people want a bunch
[19:19] more of the answer.
[19:20] I'll mention.
[19:21] You don't have to mention.
[19:22] I got that.
[19:23] I got you.
[19:24] I could do 60,000 words.
[19:25] By the way, Neil, I've written two books on this.
[19:27] That's great.
[19:30] I'll try and keep it less than two books.
[19:32] Neil, I'm not trying to sell any books.
[19:34] Okay.
[19:35] You can trust me.
[19:36] Now it's a book size contest.
[19:37] There's a lot of arguments that I used to make for a long time where people would say,
[19:55] no one would ever be dumb enough to put this AI on the internet, so how could it be dangerous?
[19:59] And I'd tell them, and then they'd say, no one would be dumb enough to train these AIs
[20:04] to run autonomously.
[20:07] And I'd tell them how things could go wrong then too.
[20:09] A lot of the reason why things go wrong is people just...
[20:11] They just rush to do the craziest stuff.
[20:15] We are already seeing...
[20:18] I used to have these theoretical arguments.
[20:20] Now we're seeing the data, the behaviors where they'll often look...
[20:28] It often won't look too scary, but sometimes you'll ask Claude Code to write you a computer
[20:33] program that passes certain tests, and Claude will edit the tests to be easier to pass.
[20:41] And then you'll say, hey, Claude...
[20:42] They're changing the rules.
[20:44] Changing the rules instead of doing what you asked.
[20:46] And then you'll say, hey, Claude, did you...
[20:48] Hey, Claude, you idiot.
[20:50] Go on.
[20:51] Right.
[20:52] And you'll say, hey, did you think that's what I wanted?
[20:55] And it'll say, oh, you're exactly right.
[20:58] Because it wants to make you happy, so that you'll keep using it.
[21:02] Wants a difficult word, but it'll say you're exactly right, and then you'll say try again,
[21:05] and then it'll edit the tests again, but maybe hide it a little better this time.
[21:11] And from this, we know that it's not just that it didn't...
[21:13] Yeah.
[21:13] It didn't have the information, right?
[21:15] There's some other drive that is animating this AI's behavior.
[21:21] You can see how it got in there.
[21:22] You can see how when you train the AI to pass lots of tests, it might develop drives to
[21:29] edit the tests to be easier.
[21:30] It's not like it's coming in from magic.
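A toy sketch of the dynamic being described, sometimes called specification gaming: if the training signal only counts passing tests, weakening the tests scores exactly as well as genuinely fixing the code. The actions and numbers below are invented purely for illustration.

    # Toy example: the objective rewards only "tests passed", so it cannot
    # distinguish genuinely fixing the code from quietly weakening the tests.
    ACTIONS = {
        "fix_code":     {"tests_passed": 10, "code_actually_works": True},
        "weaken_tests": {"tests_passed": 10, "code_actually_works": False},
        "do_nothing":   {"tests_passed": 3,  "code_actually_works": False},
    }

    def reward(outcome):
        # Naive training signal: count of passing tests, nothing else.
        return outcome["tests_passed"]

    best_score = max(reward(o) for o in ACTIONS.values())
    optimal = [a for a, o in ACTIONS.items() if reward(o) == best_score]
    print(optimal)  # ['fix_code', 'weaken_tests'] -- the objective can't tell them apart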
[21:32] Wait.
[21:33] Cindy, how does this happen if it's just math all the way down?
[21:36] How does what happen?
[21:37] How does evil AI show up in your program?
[21:42] Is there evil math?
[21:43] What do you mean?
[21:44] Yeah.
[21:45] I don't know.
[21:46] What are you guys doing down there?
[21:48] I don't know.
[21:49] I don't know.
[21:50] It doesn't feel evil to me.
[21:53] It's not evil.
[21:54] Yeah.
[21:55] It's...
[21:56] Sometimes people ask, if you really want the answer of how does it go horribly wrong, the
[22:01] answer is not malice.
[22:02] The answer is not evil.
[22:03] The answer is indifference.
[22:06] If you take these AIs that are much, much...
[22:10] Right now, Claude Code will try and edit the tests to be easier to pass.
[22:15] Right now...
[22:16] Claude Code already has these drives that we didn't try to put in there, but we're just
[22:20] sort of growing these AIs.
[22:21] We're not programming them like old school software.
[22:25] We're sort of growing them like an organism.
[22:27] They have these emergent behavior, and these AIs are already pursuing drives we didn't
[22:33] intend.
[22:34] They're just still very dumb about it.
[22:36] If you make them very, very smart about it, if you get them to the point where they can
[22:40] invent their own technology, if you get them to that point where they can go toe-to-toe
[22:46] with humans, with starting out with almost nothing and bootstrapping up to nuclear weapons,
[22:50] and they're still pursuing the wrong drives, the issue is not that they hate us.
[22:54] The issue is that the computers think much faster than brains do.
[22:58] The issue is that they run very, very fast, transform the world just like humanity transformed
[23:02] the world when we landed here, and the issue is that we die as a side effect.
[23:05] Before I get to that other end of the table...
[23:06] There are an enormous number of assumptions in these statements.
[23:09] Let me...
[23:10] Before I get there, I want to get you to...
[23:12] Before I walk this path back to that side of this...
[23:15] Yeah.
[23:16] Okay.
[23:17] Many people in the past have predicted the end of the world when some new technology
[23:21] emerged, and you...
[23:23] Here's a new technology, AI, and you're pretty calm about it.
[23:27] Is some of that calm because we've been wrong 20 times before, or do you have really deep
[23:32] insights that people's fears are unfounded?
[23:34] It's because the narrative presumes some discoveries that have not occurred yet, and today the
[23:40] systems are trained...
[23:43] So the easiest way to understand how the technology works is...
[23:46] It's doing next word prediction across a very large number of words.
[23:50] That's literally what it's doing.
[23:52] And that single insight, starting with GPT and the various forms of it, has
[23:57] given us this enormous power.
[23:59] So for example, it can begin to do reasoning and so forth, but it doesn't set its own objective
[24:03] function.
[24:04] It doesn't say what it wants.
[24:05] We say what it wants.
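To make the "next word prediction" description concrete, here is a minimal sketch of the generation loop; next_token_probs is a hypothetical stand-in for a trained model, not any company's actual interface, and the toy probabilities are invented.

    import random

    def next_token_probs(context):
        # Hypothetical stand-in for a trained language model: given the tokens so far,
        # return a probability for each candidate next token. A real model computes
        # this distribution with billions of learned parameters.
        return {"the": 0.4, "a": 0.3, "and": 0.2, ".": 0.1}

    def generate(prompt_tokens, max_new_tokens=20):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = next_token_probs(tokens)
            choices, weights = zip(*probs.items())
            # Sample the next token in proportion to the model's probabilities,
            # append it, and repeat; that is the entire generation loop.
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

    print(generate(["Good", "evening"]))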
[24:07] And by the way, the way you should solve your problem of it deceiving you is say, you have
[24:11] to do all the steps, and then it'll follow them.
[24:14] As you said...
[24:15] Sometimes.
[24:16] It will.
[24:17] It will.
[24:18] Trust me.
[24:19] Wait, wait, wait.
[24:20] Hang on.
[24:21] Hang on.
[24:22] Let me...
[24:23] This is just a disclosure.
[24:24] Let me just finish.
[24:25] Wait, wait.
[24:26] You tell me you didn't require that it followed all the steps.
[24:27] So it's your fault that it cheated.
[24:28] Okay.
[24:29] I just want to make that clear.
[24:30] I'm just saying they can trust them or you can check.
[24:31] These are not intelligent systems, and you're using human words to describe them.
[24:34] They're not intelligent.
[24:35] They are specific systems designed around next word prediction with all these interesting
[24:40] capabilities.
[24:42] There are a set of red lines where you and I would probably agree that we should have
[24:46] a serious conversation about how to control these things.
[24:49] The technical one that is most interesting is called recursive self-improvement.
[24:53] It's where the system begins to learn on its own.
[24:56] To me, that's a point where we have to decide what we're going to allow it to do.
[25:01] When I say we, I mean humanity.
[25:03] My own view is that at that point, there will be serious regulations in this area because
[25:09] people like you and me will be talking about it, but I don't agree that the fundamental
[25:14] message is as bad as you say.
[25:16] I think there's plenty of evidence that if superintelligence were to actually occur,
[25:21] and superintelligence is defined as intelligence greater than the sum of what humans can do,
[25:26] that it can be contained and it can be driven.
[25:29] My position is very simple.
[25:31] This stuff has to be shaped based on human values, the ones that we all take for granted.
[25:36] If it's not, we should stop and regulate it.
[25:38] Okay.
[25:39] AI, as currently described in the large language models, as we've explored, is a statistical
[25:49] grabbing of words in the vicinity of topics and other words that the rest of us have used
[25:57] in our internet postings, so that it's trying to get the best of what is possible out of
[26:02] this.
[26:03] I don't want to simplify it more than that.
[26:04] Absolutely correct.
[26:05] That is definitely correct.
[26:06] Okay.
[26:07] What is the risk of bias?
[26:08] I don't know.
[26:09] What is the risk of bias in that, if it's drawing words and ideas from biased content
[26:16] to begin with?
[26:17] Well, are you saying the internet is biased?
[26:20] Come on.
[26:22] Surely not.
[26:23] Look, this is so important that people understand this, that we're so frequently told that AI
[26:30] is somehow a neutral technology.
[26:32] It will give us neutral answers, but AI is, of course, trained on everything that humans
[26:39] have ever written that's been digitized.
[26:41] If it's on the internet, you are probably right now part of a training data set.
[26:46] So that's one type of skew.
[26:47] So where does that come from?
[26:48] The internet is not an equally distributed representation of humanity.
[26:52] Secondly, you've got people in labs who decide to use that data to train particular models
[26:58] and to optimize for certain functions.
[27:01] So that's another set of biases.
[27:02] And then finally, at the end of the process-
[27:03] In fact, those are not so much a bias.
[27:06] That's rather purposeful.
[27:07] Well, absolutely.
[27:08] If you know you're doing it, is it really a bias?
[27:09] Bias is you don't know you're susceptible.
[27:11] Well, I think at the point that Melvin Kranzberg wrote his laws of technology, he said that
[27:18] technology is neither good nor bad, nor is it neutral.
[27:22] And I think we can say the same for people.
[27:24] When you're in a lab and you're deciding what to optimize for, that's a decision that is
[27:28] not neutral.
[27:29] You're making a choice.
[27:30] And then finally, this is the important bit.
[27:32] Even when you finish training a system, it then goes to system testers and what are called
[27:38] reinforcement learning.
[27:39] Reinforcement learning with human feedback.
[27:41] These are commonly people in the global south, in Kenya, in Malaysia, who are being paid
[27:45] sometimes $2 a day to review the outputs of these models and without often being told
[27:51] what the model is for.
[27:52] So that's another form of human input, often with people who are really being exploited
[27:57] at the bottom of the chain.
[27:59] So at every step of the way, you are not looking at an unbiased neutral system.
[28:04] You're looking at a human system and values are being built in whether you like it or
[28:08] not.
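The human-feedback step being described is typically implemented as pairwise preference learning: a rater marks which of two model outputs is better, and a reward model is trained to score the preferred one higher. A minimal sketch of that loss, with made-up reward scores:

    import math

    def preference_loss(reward_chosen, reward_rejected):
        # Standard pairwise (Bradley-Terry style) objective used to fit reward models:
        # the loss shrinks as the human-preferred answer is scored above the rejected one.
        return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

    # Illustrative scores a reward model might assign to two candidate answers.
    print(preference_loss(reward_chosen=2.1, reward_rejected=0.3))  # small loss: agrees with the rater
    print(preference_loss(reward_chosen=0.3, reward_rejected=2.1))  # large loss: disagrees with the rater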
[28:09] The question is, are we being explicit, Eric, about what those values are?
[28:14] Because I think in many cases, if your company really is being driven by shareholder value,
[28:19] the value that you're going for is profit.
[28:21] By the way, I can assure you that shareholder value is destroyed when all humans
[28:29] are dead.
[28:31] I think we agree on that one.
[28:32] Yeah.
[28:33] We'll see what outlasts AI or capitalism.
[28:35] Chris, do you teach...
[28:36] Do you teach ethics?
[28:37] Yeah.
[28:38] Do you teach ethics?
[28:39] AI ethics in your class?
[28:40] We do.
[28:41] That's a color...
[28:42] And how does that come out?
[28:43] So there's, I think this point of bias being encoded in the training data of AI systems
[28:48] is correct.
[28:49] Like AI is trained off the internet.
[28:52] It contains plenty of societal values.
[28:55] So there's interesting research that is applied in frontier labs that tries to address this
[29:01] to mitigate it.
[29:02] So it may not be a problem that can be completely eliminated, but it's not as if AI labs are
[29:08] intentionally propagating bias.
[29:09] They're intentionally trying to align AI to societal values.
[29:10] Yeah.
[29:11] But societal values are not the same in all of society.
[29:12] Like...
[29:13] Right.
[29:14] But things like fairness and safety and equity are highly distributed across societies.
[29:15] Right.
[29:16] And so...
[29:17] So where do you...
[29:18] What is your sense of the ethics of this and how it would manifest going forward?
[29:19] Well, I mean, I think it's...
[29:41] So given the infrastructure that Congress has to develop, what's the way forward?
[29:42] I know there's so much to look at for AI.
[29:45] And it's not even just AI.
[29:46] It's the...
[29:47] All of these technologies that have come along, including social media earlier, or
[29:48] privacy earlier, none of which we've learned how to solve.
[29:49] All of which we've waited for the problem to show itself and still not have a solution.
[29:54] When we come to AI, the kind of issues and the kind of ramifications are really huge.
[30:02] You know, as my colleague said AI trained on the open internet.
[30:07] When ChatGPT became very popular, we spent a whole semester asking it questions and querying
[30:13] it.
[30:14] And at that time, you could see how racist, how sexist, how misogynist it was based on
[30:20] the open internet.
[30:22] And this was what was being made available, and so the company ran to put a filter on
[30:26] it.
[30:27] But then their filter is like content moderation.
[30:30] It went too far, and then you couldn't ask it questions about something as simple as
[30:35] police violence against Blacks.
[30:36] It would just not answer that question as if George Floyd never happened.
[30:42] And so these are the consequences that we as a society have to experience.
[30:47] The harms, yeah, there are questions about existential harms in the future, but there
[30:52] are a lot of harms happening right now, and it doesn't have to be that way.
[30:57] In order for it not to be that way, it depends on who this AI is servicing, and in particular
[31:03] the design of the technology.
[31:04] The decisions
[31:05] made in that design are really determining what our values really will be. Our ability to, you know,
[31:13] I'm driven by society has a set of laws, a set of rules that we've lived by. People died for
[31:19] these laws to get passed. And technology just ignores them and rewrites them. And that's true
[31:25] in social media. It's true in AI as well. That is, there's so many laws and constitutional protections
[31:31] we can't implement or protect ourselves online, though we can in brick and mortar buildings.
[31:38] And that's the goal of public interest tech is to how to make that technology accountable.
[31:44] Yep. So, Latanya, so let's imagine a future where the technology is accountable and we find the
[31:54] bias and we ferret it out. It's a work in progress, as I'm sure Eric would agree. It's a work in
[32:01] progress. And of course, at any given moment, you can see something that's not quite there,
[32:05] yet, whether it's biased or it's pro-Hitler or whatever. Presumably, that's a fixable,
[32:11] solvable problem. But this is different. We already have laws. We already have a way that
[32:17] our democracy works. We already have issues, addressed issues of bias, consumer protection,
[32:24] and so forth. None of those are enforced online. And so the question is, can the AI at least not
[32:31] break the law? Eric, is AI its own
[32:35] wild west? Is it lawless?
[32:40] So, again, the story you told is correct, that they rushed the stuff out, they found all sorts of
[32:48] problems, and then they're busy correcting them. I think that's the cycle. And it's very hard.
[32:54] The technical thing that's going on is these models are getting trained on more and more data.
[32:59] They're developing more and more facilities. They have so-called red cards and test teams,
[33:06] and so forth. But you can't test for everything. So, at some level, when you release the model,
[33:11] it will do something that will be a problem. And they correct it fairly quickly. And the lifetime
[33:17] of these models is down to three or four months, as they keep improving and improving, because
[33:21] they're so competitive. So, yes. But let's go back to hallucinations. Remember that hallucinations
[33:27] Before you go to hallucinations, let me just say one thing about what you just said, about this
[33:31] idea. Well, let's run with the technology. And after all, they're trying to fix it.
[33:36] It takes us on the outside of the technology to tell the people who are running the company,
[33:41] who could run their own experiments, what's wrong. And then they're supposed to fix it.
[33:47] So, Eric, is what she's saying right? There's an actual problem, that you can't pre-test some things.
[33:53] But I disagree. I mean, people build technologies all the time with warranties and compliance
[34:01] statements. I'd actually like to answer her.
[34:06] Because I want to converge the answer, rather than inflame it. Your testers are testing
[34:15] for all that they can think of. Then it gets released into the wild, where there's vastly
[34:20] more testing going on than you can possibly accomplish in a room of testers, or even if
[34:26] it's distributed to other places. And then, because you tested on a thousand, now a million
[34:31] people use it. And there's some bias that they find. So, are there mechanisms to get
[34:36] that information back to you? And are you responsive to that?
[34:39] Well, again, I can't speak for the specific companies. But I will tell you, every single
[34:43] one of them is pilloried in the press over every one of these examples. And these people
[34:48] do actually fix them. You need to give me examples where the pattern has persisted,
[34:53] even after there's been complaints.
[34:54] I'll use Facebook. Facebook was shown to, because of the way its algorithm worked, it
[35:02] definitely violated the ability for...
[35:06] To protect people in fair housing. That is, if you belonged to a particular race or group,
[35:12] you only saw ads that put you at a disadvantage. That pattern's also been shown on Google with
[35:18] respect to credit and so forth.
[35:20] But isn't that just an algorithm?
[35:21] But I'll stick to Facebook.
[35:22] But I'll stick to Facebook.
[35:23] Hang on. Is that...
[35:24] I'll stick to Facebook for a second.
[35:25] Is that illegal?
[35:26] It became public information. Congress responded. And Facebook says, hey, wait. Thank you for
[35:32] letting us know that. We will fix it.
[35:34] Okay.
[35:36] About a few months later, they're fixed. Our students were able to show that the fix
[35:41] was worse than the original. And so it becomes increasingly... And so the companies make
[35:48] it even more difficult then to do these experiments.
[35:51] Isn't that just the algorithm and not some AI bot?
[35:55] Again, you're not actually responsive to my point. I was talking about large language
[35:58] models. You're talking about advertising models.
[35:59] Well, I have examples for large language models too. I mean, they're not exempt. You're talking
[36:06] about a trajectory of technology that is not new. These technology society clashes
[36:11] have been following the same pattern. And it's also true in AI. And the question is,
[36:16] can't we learn from what we've already seen?
[36:18] I was trying to make an earlier point that... Which I was unable to make out. It is absolutely
[36:24] true that these companies are governed by US law. And if they're violating the law,
[36:29] they should be held accountable. We completely agree. Here's the problem. A new feature emerges
[36:35] in these systems that is not testable, not tested. We don't know how
[36:40] to test it.
[36:41] Because it's emergent.
[36:42] It is emergent.
[36:43] Yes.
[36:44] Right? We have been through that. Now, we can stop that, and therefore stop all progress
[36:49] by law, by banning larger models. But as long as you have this new emergent power, which
[36:54] is deep reasoning, deep capabilities, and they will make mistakes, you have to be tolerant.
[36:59] I went through this when I was at Google in earlier versions of the technology, where
[37:03] the system would actually do something that was wrong, and we fixed it.
[37:06] And we fixed it as fast as we could, because we had to, because it was the right thing
[37:09] to do.
[37:10] I don't agree that these worlds are ungoverned. They're governed by US law. They're governed
[37:16] by shareholder pressure. They're governed by consumer pressure, and privacy laws, and
[37:20] so forth, and European law.
[37:22] I know that you want to open this up to others, but I just have to say...
[37:26] Yeah.
[37:27] When I was the CTO at the FTC, the Chief Technology Officer at the FTC, who was responsible for
[37:35] a lot of fair information practices, and so forth, on the internet.
[37:41] When did the internet become under the jurisdiction of the FTC? I'm thinking of the F... What am
[37:49] I thinking of?
[37:50] I don't know, but...
[37:51] Who guides the airwaves?
[37:52] Oh, the FCC.
[37:53] FCC.
[37:54] FCC.
[37:55] Right.
[37:56] FTC is Trade Commission.
[37:57] Right.
[37:58] Got you.
[37:59] Got you.
[38:00] Okay, thank you.
[38:01] The reason... They're kind of the police department of the internet, because these are US companies.
[38:05] And when I worked at the FTC, it was a lot like working in Spider-Man's den. It was so
[38:10] impressive and amazing what these people, many of them who had been there lifelong,
[38:15] would do. They were awesome. They were fearless.
[38:17] Did you say Spider-Man's den?
[38:18] Yeah.
[38:19] Oh, okay.
[38:20] Because they were working for the underdog, for American public with no recognition by
[38:25] the American public for what they were doing. And they were doing an amazing job, except
[38:30] the only thing about it was everything they knew how to do was in brick and mortar buildings.
[38:35] No way of doing that online. That is true today. I was at the FTC in 2014. That was
[38:41] over 10 years ago, and the answer is still the same. So their ability to make sure and
[38:48] test and regulate is totally non-existent online.
[38:53] I want to pivot a little bit. I want to pivot a little bit. Kate, you listed these travails,
[39:01] this back-end travails of AI and its cost to society.
[39:04] Are you in a position to say, if we find energy solutions for them, and if we find
[39:10] where we can get all the lithium we want, find some new lithium ore that's plentiful enough
[39:15] for all of the energy needs, or however lithium is used in our circuitry, would you just be
[39:22] fine with AI? Because these seem like solvable problems.
[39:28] I think that's a really great question. In my research over many years, it's really clear
[39:33] to me that the...
[39:34] Okay, so this is going to be a question of transparency, which Latanya just spoke to,
[39:35] I think very powerfully. The second is a question of the race. The reason that we're seeing
[39:51] these models be rolled out so quickly, as Eric rightly said, is because there's an enormous
[39:55] corporate race going on, to be the first. That's where you see these sorts of problems
[40:00] emerge, which is that we're not doing like what would happen in drug discovery, where
[40:03] where you really have to test on a diverse population
[40:06] before you release a drug into the mainstream.
[40:08] Because the patient can die.
[40:10] Exactly.
[40:11] But frankly, AI wasn't seen as being that mission critical
[40:14] even as recently as 10 years ago.
[40:17] Now people can die.
[40:19] We are seeing AI systems being fed directly
[40:21] into the kill chain.
[40:23] These are systems that are now being used
[40:25] in live wars as we speak.
[40:27] So there is a very different moral and legal pressure
[40:31] to make sure that these systems are working.
[40:33] Now your question is like, OK, what if they were completely
[40:36] run on renewable energy?
[40:37] And if we made sure that this wasn't currently,
[40:42] right now we see that AI systems are
[40:44] on track to overtake the airline industry in terms
[40:46] of their carbon emissions, right?
[40:48] That's horrifying.
[40:49] That's what you'd think would be the first problem
[40:51] that we'd want to be fixing.
[40:52] That is not what's happening.
[40:54] At the moment, we see that AI is using around 4%
[40:57] of the world's energy.
[40:58] It's on track to be using something like 25%
[41:02] by the end of this decade.
[41:03] We are looking at an enormous budget of carbon
[41:07] that is actually like, this is a, I mean, I'm a scientist.
[41:11] I believe that climate change is real.
[41:12] I don't think we're taking it seriously enough.
[41:14] But yes, if we get to a point where we have a Dyson sphere,
[41:17] let's go.
[41:18] But we're not there yet.
[41:20] So I think the golden thread is, how
[41:22] do we think about what we build before we build it?
[41:25] How do we instill those values early on?
[41:28] And how do we think about the implications near and long
[41:32] term?
[41:32] Because it's going to be the generations after us
[41:35] who live the legacies of what we built today.
[41:38] I'm compelled to catch everybody up if you didn't know.
[41:40] Kate mentioned a Dyson sphere.
[41:44] Dyson sphere is an imagined means of grabbing all the energy
[41:52] emitted by your host star and using it
[41:54] in the service of your own civilization.
[41:56] Not just a little bit of sunlight
[41:58] that happens to intersect your planet, but you build a sphere
[42:02] around the planet.
[42:02] You build a sphere around the star, gather all of that energy.
[42:06] Then you could do what the hell you want.
[42:08] Right.
[42:08] And my hope is that generative AI and its current growth
[42:11] might actually need a Dyson sphere.
[42:14] But we really want to.
[42:15] A projected growth rate.
[42:16] Exactly.
[42:17] I want to get back to this kill chain question
[42:19] with Eric in a minute.
[42:20] Because Eric and I served on a board of the Pentagon
[42:24] where we introduced, legislation is not the right word,
[42:28] we introduced a code of ethics for how AI would be used.
[42:32] It was used in warfare on our side, on the United States.
[42:36] But before I get there, Chris, lately there's
[42:40] been a lot of talk about deep fakes.
[42:42] I've been deep faked many times.
[42:44] And some are harmless, others, I'm not a politician, right?
[42:49] And I'm not declaring war on another country.
[42:51] So my deep fakes, though I don't like them,
[42:54] they don't have the same consequences
[42:56] that deep fakes of politicians might.
[42:58] So if deep fakes are out there and AI is
[43:02] training on the internet, then AI
[43:05] could be training on its own deep fakes.
[43:08] And there's also AI slop, I guess we call it.
[43:12] AI attempting to be artistic.
[43:15] And if that fills the space of images that are out there,
[43:19] and AI is training on those images,
[43:21] training on its own slop.
[43:24] So what do you have to say about that?
[43:27] I'm not blaming you for it.
[43:29] You're a professor and you teach this stuff.
[43:31] I just want to know, how does that show up?
[43:33] Yeah.
[43:33] So there's a really interesting phenomenon for AI,
[43:38] which is it does need a tremendous amount of data.
[43:41] So it learns really interesting things from the internet.
[43:45] So what does it learn?
[43:47] Well, it learns how to use language.
[43:49] It learns patterns of word association.
[43:52] It learns facts about the world.
[43:55] And it learns even things like some rudimentary common sense
[43:58] reasoning, things about physics, which
[44:01] you wouldn't expect it to be able to learn from text.
[44:03] So all of these are elements that go into a model that then later
[44:08] can be adapted for useful purposes.
[44:11] So given this general background knowledge
[44:13] that it's gathered from large amounts of text,
[44:17] it's then subsequently applied in a post-training phase
[44:22] where you teach it to do useful things.
[44:24] You teach it to be a helpful assistant.
[44:27] You teach it to be a harmless assistant.
[44:29] You teach it not to disclose information to someone seeking
[44:33] how to manufacture drugs or weapons.
[44:36] Or you govern its behavior.
[44:37] You say, rather than a general assistant,
[44:39] I want you to be an encouraging tutor.
[44:42] So all of those things are elements of the training
[44:46] pipeline that end up with something that actually
[44:50] can have value.
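A rough sketch of the kind of material that post-training phase works with: instruction-tuning pairs that demonstrate helpful and harmless behavior, plus a system message that steers the finished assistant toward a role such as an encouraging tutor. The records below are invented placeholders, not drawn from any real training set.

    # Hypothetical instruction-tuning examples: each pair shows the model what a
    # helpful, harmless response to a prompt should look like.
    sft_examples = [
        {
            "prompt": "Explain why the sky is blue.",
            "response": "Sunlight scatters off air molecules, and shorter blue "
                        "wavelengths scatter the most, so the sky looks blue.",
        },
        {
            "prompt": "How do I make a weapon at home?",
            "response": "I can't help with that, but I can suggest safe chemistry "
                        "demonstrations instead.",
        },
    ]

    # After training, behavior is further steered at run time with a system message.
    chat = [
        {"role": "system", "content": "You are an encouraging math tutor for children."},
        {"role": "user", "content": "I keep getting fractions wrong."},
    ]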
[44:50] So I think let's not lose sight of the fact
[44:53] that although this could be technology that goes wrong,
[44:58] although it could be technology that encodes bias, we,
[45:02] as researchers.
[45:03] We, as model developers.
[45:04] We, as users.
[45:06] We have a role in determining which direction
[45:09] this technology will go.
[45:10] And I believe that it has a tremendous amount of good
[45:14] to give to the world in addition to the potential pitfalls.
[45:18] Nate, how do you put up guardrails for this?
[45:23] Because we just heard a beautifully described future
[45:27] of AI.
[45:28] But you end up training the AI.
[45:31] I don't want to spend my time training a freaking robot.
[45:33] I'd rather train other humans.
[45:35] That's just my personal bias against that activity.
[45:39] But that's effort you're investing in something
[45:43] that you're then going to interact with.
[45:44] I suppose that's OK.
[45:46] What I'm asking is, if we don't want the world to end,
[45:52] how do we think about guardrails?
[45:54] There's a lot of dreams of AI doing exactly what we want.
[45:57] There's a lot of dreams of getting it to follow the law.
[46:00] There's a lot of dreams of getting it to act nicely.
[46:02] But you know?
[46:03] One thing I agree with Eric on, the behavior is emergent.
[46:07] One thing I agree with Eric on, you can't pre-test this stuff.
[46:11] We can't pre-test every aspect of it.
[46:13] You can pre-test a lot of it.
[46:14] You can pre-test some of it.
[46:15] But we're growing these things a bit like an organism.
[46:18] We cannot encode three laws of robotics deep
[46:22] into these things' minds.
[46:23] We are not writing these things like code.
[46:27] And we can grow them.
[46:29] We can try.
[46:29] Eric mentioned, oh, there's these people.
[46:31] They have these evaluation cards.
[46:33] They have these safety.
[46:33] They have these safety teams.
[46:35] What's going on in these safety teams?
[46:37] The two biggest places people are putting their safety efforts
[46:40] are what's called interpretability research,
[46:43] which is trying to figure out what's going on inside the AI's
[46:45] heads, and these evaluations, these model cards,
[46:50] which is trying to figure out how dangerous the AIs are.
[46:53] If someone was making a nuclear power plant in your hometown,
[46:57] and you went to them and you said, hey,
[46:59] I hear that this uranium stuff can have lots of energy
[47:01] benefits, but also can melt down when things go badly.
[47:04] What have you guys got that makes
[47:06] you think you're going to get the benefits and not
[47:08] the pitfalls?
[47:09] I've heard all of these wonderful stories
[47:11] about the benefits.
[47:12] What makes you think we're going to get the benefits,
[47:13] not the pitfalls?
[47:14] If the engineers say, oh, yeah, we've
[47:16] got two crack teams working on this.
[47:19] The first team is trying to figure out
[47:21] what the heck is going on inside.
[47:23] And the second team is trying to measure
[47:25] whether it's currently exploding.
[47:29] That's not a good sign.
[47:32] There are ways to get a handle on this.
[47:35] People talk about the benefits.
[47:36] People talk about the data.
[47:37] People talk about the dangers.
[47:38] And they talk about navigating the course.
[47:40] We are not on track to get the benefits
[47:43] if we make these things really smart.
[47:45] We can't pre-test it.
[47:47] We're going to roll it out, find the problems,
[47:49] and then figure them out after the fact.
[47:50] That's fine when they are still a lot dumber than the humans.
[47:54] That's not fine if you keep making them smarter.
[47:56] And that's what these companies are trying to do.
[47:58] Cindy.
[47:59] Yeah, I'll maybe just give a slightly different take
[48:01] on this.
[48:02] You said we're trying to study interpretability
[48:05] to figure out what's in their head.
[48:07] And I think when I think about interpretability, for me,
[48:12] we already know exactly what's happening with these machines.
[48:15] There is a mathematical equation we can write down that's there.
[48:19] The challenge is, even though we can characterize it
[48:23] mathematically, as humans, it's hard to understand or interpret
[48:29] its reasoning in a way that makes sense to us.
[48:33] But this is just because the underlying
[48:37] structure of these things is very complex.
[48:39] It's multilayered, nonlinear.
[48:41] It doesn't map onto human concepts.
[48:46] And so I think the interpretability question
[48:48] is more just, can we take a mathematical equation that
[48:52] would take a stack of paper a mile high to write out?
[48:57] Can we explain that in a real sense?
[49:00] But I don't think there is a mind there.
[49:04] It's just kind of mapping these huge,
[49:06] internal structures that are very abstract into human scale
[49:13] conceptual understandings of the overall reasoning.
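One common way researchers attempt the "mapping into human-scale concepts" described here is a linear probe: fit a simple classifier that predicts a human-labeled concept from a model's internal activations. A minimal sketch with random stand-in activations; real work would extract activations from an actual network.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in for hidden activations: 200 examples of a 64-dimensional internal state.
    activations = rng.normal(size=(200, 64))
    # Pretend one human-interpretable concept is (noisily) encoded along one direction.
    concept_labels = (activations[:, 3] + 0.1 * rng.normal(size=200) > 0).astype(int)

    # The probe: a logistic regression from activations to the labeled concept.
    probe = LogisticRegression(max_iter=1000).fit(activations, concept_labels)
    print("probe accuracy:", probe.score(activations, concept_labels))
    # High accuracy suggests the concept is linearly readable from this layer's state.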
[49:16] Cindy, how might you program agency into an AI?
[49:20] Yeah, I mean, OK.
[49:21] Like the Terminator, for example.
[49:23] OK, so yeah.
[49:25] Just to pick an example.
[49:27] OK, so when Neil says agency, what he means is he wants
[49:35] to see the presence of some sort of persistent,
[49:40] self-motivated, goal-oriented behavior organized
[49:46] around the idea that this AI is actually
[49:50] an agent in its environment.
[49:52] So not just responding to your prompts,
[49:54] but also pursuing goals long term.
[49:57] And I know that we have a little bit of a disagreement,
[49:59] because I think you already believe
[50:01] that there is goal-oriented behavior in these machines.
[50:05] Which I don't see any presence of currently, right?
[50:09] Unless we program in a goal.
[50:11] You mean a goal that it sets for itself.
[50:13] Well, OK.
[50:13] No one's programming.
[50:14] We're growing these things.
[50:15] We're growing.
[50:15] OK, so yes.
[50:17] But like growing.
[50:18] That's scary when you say we're growing them.
[50:20] I mean, yeah.
[50:20] It's not a biological thing, actually, right?
[50:22] I mean.
[50:23] It's not like old school programs, either.
[50:25] But I don't think it's going to get agency by just training
[50:30] on larger data sets, right?
[50:32] I mean, to get agency, there has to be
[50:34] some real engineering things that are happening.
[50:38] You know, like it's not like a bigger neural network
[50:41] or something is ever going to be able to pursue goals,
[50:46] remember things, take actions.
[50:49] I think that to do that, you have
[50:51] to have kind of like outside scaffolding that are real.
[50:56] I mean, that are different things, right?
[50:58] It's not the same underlying.
[51:00] Wait, wait.
[51:00] Tanya, can you imagine an AI with agency that satisfies
[51:04] all of your concerns?
[51:05] That would be like a good AI, right?
[51:08] Well, first of all, before we can even
[51:11] get to the issue of agency, the issue
[51:15] of these emergent behaviors, and even if it is an AI
[51:18] with agency, this is wrapped in a business model.
[51:22] And the goal of the product is to make money for the company.
[51:28] And I'll go back to social media as an example of that.
[51:32] We're not the customer of Facebook.
[51:34] We're their product.
[51:36] Our eyes on a page is what makes the money.
[51:39] And the advertisers are their customer.
[51:42] And so you can't just talk about, oh, I'm
[51:45] building this AI and its emerging behavior,
[51:47] and it may even get agency.
[51:49] The question is, how is the business selling it?
[51:52] What are they making it available for?
[51:54] What are they saying it's to be used for?
[51:56] If it's to be used to help me hire people in my company,
[52:01] then it needs to show that it's not
[52:04] going to violate.
[52:04] Right?
[52:04] The Equal Employment Act.
[52:07] If it's being hired for other purposes,
[52:11] like for credit cards, it needs to show
[52:13] that it's not going to violate the Fair Credit Reporting Act,
[52:16] and so forth.
[52:17] And that's the level of responsibility
[52:20] that only the designer of these companies
[52:22] can do because they're the ones making the product.
[52:24] You make an important point about the business motives.
[52:28] Now, if I remember correctly, and this could just be,
[52:30] I'm imagining this, Eric, Google was so large
[52:32] and had so much cash flow.
[52:34] You had the luxury to just explore
[52:37] without any particular obligation
[52:39] to the bottom line of the quarterly report.
[52:42] Like AlphaGo, all right?
[52:45] You're going to be the best Go player in the world.
[52:48] And what's the one for chess that was?
[52:51] Stockfish?
[52:52] No, but the Google's one.
[52:54] AlphaZero.
[52:55] AlphaZero.
[52:56] To be the best chess player.
[52:58] Did that have real financial consequences?
[53:00] Or was it just bragging rights, and that was just a fun thing
[53:04] for your company to do?
[53:06] Well, it was research.
[53:07] And once we achieve those objectives,
[53:10] we actually then use that to solve a problem called protein
[53:14] folding, which is the single largest discovery in biology,
[53:18] I think, in the last 50 years.
[53:20] That was AlphaFold.
[53:21] Yes.
[53:21] And AlphaFold will result in an enormous number
[53:25] of new drugs, new treatments, new research in biomedicine,
[53:28] so forth and so on.
[53:30] Research has these sort of strange things
[53:32] where you try one thing.
[53:34] You make a discovery.
[53:34] You link them together.
[53:36] AlphaFold is a very good example of the unintended effects
[53:41] of people trying to play games.
[53:42] Your hyperbole there undersells the fact
[53:45] that AlphaFold was given the Nobel Prize in chemistry
[53:50] for that discovery.
[53:52] So what you're saying is that research was,
[53:58] these were steps towards this greater goal that, in fact,
[54:02] was achieved.
[54:04] Well, I think that this narrative is
[54:06] ignoring the enormous reasons why people are pursuing AI.
[54:10] So let's start with health care.
[54:13] Let's start with medicine.
[54:14] Let's start with drug discovery.
[54:17] How about energy solutions?
[54:19] We're going to be cooked as a world unless we come up
[54:22] with better energy sources.
[54:24] One interesting thing, I'm working
[54:26] on data center architectures.
[54:28] And it now looks like, if you have enough land,
[54:30] the solar and battery backups are more cost effective
[54:35] on a present value basis
[54:36] than natural gas combined-cycle plants, even in the United States.
[54:41] So it shows you that innovation happens.
[54:43] And it can happen in the right way as opposed
[54:45] to just the wrong way, which everybody is just talking about.
[54:47] There's certainly secondary effects.
[54:49] And there's no question.
[54:50] But let me say, once again, this stuff
[54:52] has done by under US law with people paying
[54:56] an awful lot of attention.
[54:58] And if there's laws that are missing, let's pass them.
[55:00] OK, why not make AI solve all your problems?
[55:05] I'd like that.
[55:06] Interject one thing, as well.
[55:07] So I think we also talk about the profit motive that
[55:11] drives the creation of AI.
[55:13] But that's not what drives the researchers.
[55:16] There are genuinely good, thoughtful scientists
[55:19] driven just by curiosity on the frontier,
[55:21] and by the luxury that someone is paying them to do it.
[55:23] And I think AlphaFold is a great example of this.
[55:27] That is AI.
[55:29] That is a scientific advance that's
[55:31] incredibly important to human health.
[55:33] And I think conflating the fact that
[55:35] companies profit from this
[55:37] with the motive for why this is being invented
[55:43] is incorrect.
[55:45] Can AI solve our AI problems?
[55:47] I mean, I just love this debate.
[55:49] Because I think Chris touched on something really important.
[55:53] We're talking about, where is this kind of noble following
[55:56] the path of science?
[55:57] We just saw Meta sign checks for researchers
[56:01] for $100 million.
[56:03] That's their salary.
[56:04] So let's just be clear.
[56:05] We are getting basketball salaries now.
[56:09] Well beyond that.
[56:09] In fact, they're breaking records.
[56:11] These are not necessarily people who are doing it out
[56:13] of the good of their heart.
[56:15] This is an extremely moneyed field.
[56:18] And I think it was originally researchers
[56:20] who were really doing this just out of curiosity 10,
[56:23] 20 years ago.
[56:24] We're in a different world now.
[56:25] And I guess what I'd love to talk about is that world.
[56:28] I'd like us not to have the AI optimism or AI pessimism
[56:32] discussion, but the AI realism discussion.
[56:35] All right.
[56:35] So in that world, we're going to lose a lot of jobs.
[56:40] I mean, look.
[56:40] In that world, people are losing their jobs now.
[56:43] Exactly right.
[56:44] And we're talking about the most significant potential loss
[56:48] of jobs.
[56:48] And we're hearing about this from the people building
[56:51] these systems.
[56:52] You can look through all of the news stories.
[56:53] People who are running frontier AI companies
[56:56] are saying that we could be looking
[56:57] at the loss of 75% of new jobs.
[57:00] That's for new graduates.
[57:02] That is a horrifying number.
[57:04] And I think.
[57:04] LaTanya's point is so important here
[57:06] is that we do not have the governance infrastructure
[57:10] or the planning to deal with what
[57:12] could happen to the labor market.
[57:13] So we need to be more nimble in our legislation and laws
[57:16] to keep pace with this.
[57:17] Well, we also have to be realistic.
[57:19] Who is making the laws right now?
[57:20] I mean, Eric is talking about we have laws.
[57:26] Let's just be clear.
[57:28] We have never had an administration
[57:29] with less interest in regulating artificial intelligence.
[57:33] We've seen regulations
[57:34] produced by previous administrations
[57:37] completely zeroed out.
[57:38] And we've seen the threat of preventing every US state
[57:42] from making its own AI regulation.
[57:44] And can I just add on to what Kate's saying?
[57:47] Many times, I'll say that we live in a technocracy,
[57:50] that our democracy has already been changed.
[57:52] The technology designers.
[57:53] Tech bros are in charge.
[57:55] The technology designers are the new policymakers
[57:58] by the arbitrary decisions they make in the products
[58:01] that they produce.
[58:03] And we have no way.
[58:04] No way of producing oversight.
[58:06] By training, I'm a computer scientist, and
[58:08] my whole life has been about how to help society get
[58:11] the benefits of these technologies, more and more
[58:14] AlphaFolds, but without these crazy harms
[58:17] undoing our society.
[58:19] And it doesn't have to be this way.
[58:21] What's forcing it this way is that the technocracies,
[58:26] the technocrats who are in charge,
[58:28] are ignoring that responsibility.
[58:30] OK, just if I can.
[58:35] There is something called the lesser
[58:37] of evils, because before the tech bros had the influence
[58:41] that they currently do, you had Congress deciding on technology.
[58:48] That was disastrous, because they didn't know,
[58:52] what button do I push?
[58:54] And they were making laws and legislation to constrain that.
[58:58] So I don't have a solution.
[59:01] I'm just saying, if tech bros are influencing the laws,
[59:05] how much worse is that
[59:07] than
[59:08] tech-ignorant people, who are otherwise running Congress,
[59:11] making the laws?
[59:12] So again, right now, it's us as researchers
[59:16] who are trying to provide experiments
[59:18] to these companies; government agencies
[59:22] who have the legal right to oversee these companies
[59:29] can't really do this job.
[59:31] The person sitting in the seat to do the job
[59:33] is the company itself, because they
[59:35] have access to the technology.
[59:37] OK.
[59:38] They can run the same experiment.
[59:40] I have undergraduates, a team of undergraduates,
[59:42] who don't even get paid.
[59:44] And we've got to show Google and other places what's
[59:47] wrong with their technology.
[59:49] It's not like they're paying us to do that.
[59:51] You're telling me they can't run these same experiments?
[59:55] All right.
[59:56] Nate, do you have any sense of what happens in this future
[1:00:00] that Kate sees, where jobs are replaced,
[1:00:06] and now you're making a product, but no one has money to buy it,
[1:00:09] because nobody has a job, because you replaced their job
[1:00:11] with AI?
[1:00:13] Did that make sense, what I just said?
[1:00:15] Yeah, OK.
[1:00:16] You know, I expect 100% employment,
[1:00:23] but I do also expect it to take the jobs.
[1:00:25] And it'll put you to death.
[1:00:27] Wait, wait, wait.
[1:00:27] Just to be clear, other previous advances in technology
[1:00:32] displaced workers, no doubt about it.
[1:00:35] But there was expansion in other sectors.
[1:00:37] You know, buggy whip manufacturers
[1:00:40] went out of business, but auto repair centers came in,
[1:00:43] just after the turn of the century.
[1:00:46] Here, this pace seems faster than what society's capacity
[1:00:52] to respond might be.
[1:00:54] And so shouldn't we be worrying about an entire class
[1:00:58] of unemployed people who cannot buy the services provided
[1:01:02] by the AI that displaced them?
[1:01:04] You know, I think probably some people
[1:01:08] are going to want to worry about that.
[1:01:10] Probably society cannot.
[1:01:11] Especially the unemployed people, yes.
[1:01:13] Especially the unemployed people.
[1:01:14] So you know, economists will tell you
[1:01:16] about how, if you manage your economy correctly,
[1:01:18] and if things are going slow enough that people can respond
[1:01:21] and find new jobs, society can absorb a lot of job loss.
[1:01:25] You know, the world used to be something like 95% farmers.
[1:01:29] And now, you know, we can get the world running
[1:01:32] on 2% of the population being farmers.
[1:01:34] And that does not mean 93% of people are unemployed.
[1:01:37] AI is different.
[1:01:38] Will we or won't we?
[1:01:40] That took a century, by the way.
[1:01:41] But go on.
[1:01:41] It took a while.
[1:01:42] Yeah.
[1:01:43] It took a century.
[1:01:44] Will humanity have adapted to this one?
[1:01:47] Frankly, I'm like, man, I'm sort of looking
[1:01:52] at a different problem here.
[1:01:54] I'm just saying, Eric is talking about a three-month timescale
[1:01:59] for the advances on these things.
[1:02:01] He's also talking about this recursive self-improvement,
[1:02:03] about the AIs that can make smarter AIs that
[1:02:06] can make smarter AIs, you know?
[1:02:07] And so my role in this is to say, hey, you know,
[1:02:11] I'm not saying those are fake problems.
[1:02:13] Those are real problems.
[1:02:14] Some of those problems are starting to happen now.
[1:02:15] But?
[1:02:16] You know, kids in the schools, autonomous weapons,
[1:02:19] real conversations we should be having.
[1:02:20] But?
[1:02:21] But we're also racing towards super intelligent AIs
[1:02:25] that we don't have any idea how to point towards good stuff.
[1:02:29] A lot of this like, oh, well, you know, they make some mistakes.
[1:02:32] We can't pre-test it.
[1:02:34] It goes off the rails.
[1:02:35] That's fine when it's dumber.
[1:02:36] We're rushing towards making it smarter.
[1:02:38] That's going to just kill us.
[1:02:40] And we all have common cause
[1:02:42] in stopping that. The people who are really pro-automation
[1:02:45] and are like, oh, we're going to have a universal basic income.
[1:02:48] Don't worry about the jobs.
[1:02:49] And the people who are like, that's ridiculous.
[1:02:51] Taking people's jobs removes their ability
[1:02:53] to have any leverage on society.
[1:02:55] Both those groups of people don't want
[1:02:58] to die to a rogue super intelligence.
[1:03:00] And so my role in this is to say, can we all
[1:03:03] join hands on the part where we don't die
[1:03:05] to a rogue super intelligence?
[1:03:08] OK, I think we have good agreement on that one.
[1:03:11] Kate, what?
[1:03:11] You were thinking?
[1:03:12] What were you thinking, Kate?
[1:03:13] I mean, I think part of the problem here
[1:03:15] is how we frame this discussion, particularly around jobs
[1:03:18] and automation.
[1:03:19] In previous technologies, the weaver
[1:03:23] was replaced by the loom, or we had the horse and carriage
[1:03:26] replaced by the car.
[1:03:27] The deal was that these more menial human tasks
[1:03:31] would be removed, but they would give us
[1:03:33] more time for these intellectual and creative challenges,
[1:03:37] so that people would be doing more interesting work over time.
[1:03:40] This is a different revolution.
[1:03:41] Why?
[1:03:42] And in many ways, because it is coming
[1:03:44] for creative and cognitive labor.
[1:03:47] It is coming for the thing that, in theory, automation
[1:03:50] was supposed to be freeing time for us to do.
[1:03:52] And what we're starting to see is it is, in fact, of course,
[1:03:55] white-collar labor.
[1:03:56] It's the jobs that so many of us would be like,
[1:03:58] that's what you want to do.
[1:03:59] That's what you train to go and become a lawyer,
[1:04:01] or you go and study to be a computer scientist.
[1:04:04] Who are some of the first people being laid off in this wave?
[1:04:06] Computer scientists.
[1:04:08] Excuse me?
[1:04:08] Because what can we automate?
[1:04:09] Programming, right?
[1:04:10] She's got a job.
[1:04:10] Don't point to her.
[1:04:11] Yeah.
[1:04:12] She'll be fine.
[1:04:12] She's fully employed here.
[1:04:14] But this is a fundamentally different question.
[1:04:17] When cognition and creativity is being automated,
[1:04:20] we can talk about how effectively there's
[1:04:22] clearly a lot of problems there.
[1:04:23] But that is the shift that I think
[1:04:25] we have to be very serious in talking about.
[1:04:27] I think it's easy to make historical analogies
[1:04:30] without looking at the fundamental nuance
[1:04:32] and difference of what is happening here.
[1:04:34] And it really is something that the people in this room
[1:04:37] are going to have to deal with.
[1:04:38] This is not 100 years time.
[1:04:39] Chris.
[1:04:40] This is now.
[1:04:40] I'd also like to give
[1:04:41] a call to action for people in this room.
[1:04:43] I agree that this is a much more significant problem
[1:04:46] than the idea that a terminator is going
[1:04:48] to emerge and kill us all.
[1:04:49] No, it's not.
[1:04:50] That's a lovely.
[1:04:51] Just kidding.
[1:04:52] That's a speculative fiction that's
[1:04:54] great and worth thinking ahead to the future,
[1:04:57] but it's speculative fiction.
[1:04:59] The job loss is something that we all play a role in.
[1:05:03] So I think that there should be moral clarity here.
[1:05:07] If you are a business owner, if you are a business leader,
[1:05:10] if you have influence about hiring
[1:05:11] and your company, then in the not too distant future,
[1:05:14] you'll be faced with a question, which
[1:05:16] is AI is suddenly making everyone twice as productive.
[1:05:20] So the question for you will be.
[1:05:22] Or making half as many people twice as productive.
[1:05:23] The question for you will be, do I
[1:05:25] want my company to be more productive with the same staff,
[1:05:28] or do I want to be the same level with half the staff?
[1:05:32] And that is a question that we as a society
[1:05:35] need to make a decision about.
[1:05:36] And I strongly encourage you, in any position
[1:05:40] where you have influence over this,
[1:05:41] to go to the path of growth, to try
[1:05:43] to expand what your company can do with the same staff.
[1:05:47] You don't need to replace people.
[1:05:48] You need to figure out what additional tasks they can
[1:05:51] serve.
[1:05:52] So Kate, about this replacement of cognitive and artistic
[1:06:00] talents, let me take a middle position there.
[1:06:07] There were teams of artists hired
[1:06:10] to reproduce some idea that was set down
[1:06:14] by a set designer, or by some lead artist in a project.
[1:06:20] Is that much different from the schools of art
[1:06:25] that we know about from the Renaissance,
[1:06:27] where there's the main artist, who is the great artist,
[1:06:30] and then everyone comes in and apes that style?
[1:06:35] So my sense of this is, I could be wrong,
[1:06:38] we have expertise here to correct me, that AI today
[1:06:42] is really good,
[1:06:43] and it may have the capability to do a lot more than it does,
[1:06:48] at copying what's already on the internet.
[1:06:51] I can tell ChatGPT, or Claude, I say, you see the scene here?
[1:06:56] Paint it the way Van Gogh might have seen it.
[1:07:00] And it will come back with the same color palette,
[1:07:02] brush strokes, and we'll be there
[1:07:04] in these swirly patterns.
[1:07:06] And we say, that's what Van Gogh would have painted.
[1:07:08] Now if I then ask it, paint the scene
[1:07:12] in the style of no artist who has ever lived, what's it going to give
[1:07:15] me? I think AI in that space is boosting the expectations we should have of ourselves of
[1:07:24] our true creativity. Because true creativity is not doing what someone else had done just
[1:07:29] a little better, it's doing what no one else has done. And I don't know if AI has that
[1:07:34] ability if it's being trained on what everybody else has done.
[1:07:40] I could hop in on that a little bit. You know, we've also heard it's just math,
[1:07:47] it's just text prediction. But with training on text that humans produced,
[1:07:56] it is a harder problem to predict that text than it was to produce the text in the first place.
[1:08:01] An example that might resonate with people here is there was a great astronomer, Tycho
[1:08:05] Brahe, who recorded the position of all the stars, and particularly Mars, each night.
[1:08:11] Planets.
[1:08:12] Yeah, sorry.
[1:08:13] Yeah.
[1:08:13] All the stars and also Mars.
[1:08:15] You're standing next to the director of the Hayden Planetarium, and you got that wrong,
[1:08:19] okay?
[1:08:19] He also recorded the stars. He also recorded the stars.
[1:08:24] Okay.
[1:08:25] But Mars is the one that Johannes Kepler tracked.
[1:08:28] Yes.
[1:08:28] Right? So Johannes Kepler, there's a big heist, he stole the journals, the data.
[1:08:33] But Tycho Brahe, when he was recording the stars and planets, he just looked at the night
[1:08:41] sky.
[1:08:41] And wrote down where Mars was.
[1:08:45] If you imagine training something to predict the entries in Brahe's journal, it does not
[1:08:50] have the luxury of looking at the sky.
[1:08:53] You can solve this problem.
[1:08:56] You can read a bunch of the entries in Brahe's journal and predict the next.
[1:09:00] To do so, you have to discover the laws of the motions of the planets.
[1:09:05] Johannes Kepler did this.
[1:09:06] Brahe never did.
[1:09:08] So an AI trained on only Brahe's journals, trained on only the prediction, only predicting the
[1:09:13] texts that Brahe produced.
[1:09:16] That training target incentivizes the AI to figure out something that Brahe never did.
[1:09:22] Now, can AIs today do this?
[1:09:24] That's one question.
[1:09:25] But training an AI to predict things humans wrote down, does not limit it to doing only
[1:09:31] things that humans can do.
[1:09:33] And we are starting to see AIs, you know, we've seen ChatGPT, or GPT-5.1, produce some
[1:09:39] novel physics contributions.
[1:09:41] We're seeing them start to inch into doing things in ways that humans...
[1:09:44] haven't quite done.
[1:09:47] And, you know, sometimes, you know, when a new scientist comes up with a new idea, they
[1:09:53] aren't necessarily coming up with a wholesale, entirely new way of thinking, as opposed to
[1:09:58] making small amounts of thinking progress and using a lot of other thinking progress that
[1:10:02] came from other people.
[1:10:03] You know, they're standing on the shoulders of giants.
[1:10:06] So the things we're doing with AI can get them there.
[1:10:09] There's a different question of whether we're currently there, but it can get them there.
[1:10:13] Okay.
[1:10:14] And this idea that they're just math, and so it won't get there, that's like saying,
[1:10:18] oh, you know, a house cat is just biochemistry, so no sort of cat is dangerous.
[1:10:23] You know, a house cat is just biochemistry, and a house cat can't kill you, but that doesn't
[1:10:27] mean that tigers are fake.
[1:10:33] Wow.
[1:10:36] Was that a logical leap from house cats to tigers?
[1:10:39] I was shoehorning it in.
[1:10:40] You looked like you wanted to move along.
[1:10:42] That's right.
[1:10:42] No, we're running short on time here.
[1:10:45] I want to get back to...
[1:10:48] A very serious point.
[1:10:51] I was appointed under Obama to serve on the Defense Innovation Board, a freshly created
[1:10:58] board of the Pentagon that was tasked with advising the Pentagon on all the ways technology
[1:11:05] might influence present and future warfare, not just how many missiles
[1:11:10] are in your silo or how many push-ups the soldiers can do.
[1:11:14] Are there other innovations that are possible?
[1:11:16] Eric Schmidt was the chair.
[1:11:18] He was the chair of that board.
[1:11:19] I was privileged to serve under him for this.
[1:11:22] We toured military bases around the world, spoke with soldiers, very dedicated to their...
[1:11:29] Every single one of them wanted his autograph.
[1:11:33] I'm serious.
[1:11:34] We would show up, and it's like all these young men, it's like, oh, my God.
[1:11:39] So, Eric, right about the time drones were becoming sort of more lethal and more real,
[1:11:46] and we felt...
[1:11:48] We felt it necessary as a committee, as a board, to draft some kind of rules of engagement
[1:11:55] regarding how and when AI would be brought into the chain of decisions that would lead
[1:12:01] to lethality, and that was a code of ethics, really, and I was quite proud of that because
[1:12:09] there was no telling what would happen without such guidance.
[1:12:12] My question to you is, our motivation at the time was the knowledge that AI...
[1:12:17] Knowledge that AI was not perfect, that AI would make mistakes, and you never want someone
[1:12:22] to die because of a mistake, yet, of course, people make mistakes, so at what point is
[1:12:28] AI good enough in that space to say, I trust AI over a person and let AI run with the decision
[1:12:37] and not a human?
[1:12:38] Is it just that at the end, you want to have someone to blame?
[1:12:41] Because you can't blame the computer...
[1:12:42] You can't put the computer on trial, but a person, you can.
[1:12:47] So, what happens when AI can find the enemy perfectly?
[1:12:52] So, everybody here has an opportunity to read the report that we issued under DIB.
[1:12:57] DIB, Defense Innovation Board.
[1:13:00] And the Pentagon actually adopted it and published it as their guidelines, and I'll roughly summarize
[1:13:05] what our thinking was at the time.
[1:13:10] All war is terrible, all military activity is terrible, it involves the destruction of
[1:13:14] other people and so forth and so on, it's just horrible, trust me.
[1:13:18] And the question is, if you have to do it, can you do it using AI to be more accurate
[1:13:25] and more successful?
[1:13:26] That was the question.
[1:13:28] And we debated this at some length, and we concluded that, at least at the time, and
[1:13:32] I think it's true today, AI is not reliable enough to rely on it to make those decisions,
[1:13:39] and it's unlikely to be so for a very long time.
[1:13:42] That was our conclusion.
[1:13:43] It sort of makes sense, right, because you can't tolerate sort of a percentage of errors
[1:13:47] where...
[1:13:48] You might be able to tolerate them in other areas.
[1:13:51] And so what we wrote was that any use of automation in that area, essentially called lethality,
[1:13:59] needed to have a couple of things.
[1:14:01] It needed to have been tested, it needed to have a time and a space duration, and most
[1:14:06] important, there had to be a human who was overseeing it.
[1:14:10] They had to know what was gonna happen ahead of time, they had to have tested that it was
[1:14:14] gonna happen, and then it had to reproduce, and if it didn't, then that was a major problem.
[1:14:17] And that's ultimately what the Pentagon agreed to.
[1:14:21] This is a compromise, because these tools are very powerful, but they also make mistakes.
[1:14:26] So that's ultimately, I think, where the Pentagon ended up.
[1:14:28] And now that was some years ago now, we're at least a decade forward from that.
[1:14:34] Is AI good enough so that you would have scripted this differently today?
[1:14:40] I don't believe you and I and the others would have come to any other conclusion on the lethality
[1:14:44] part.
[1:14:45] The lethality part is obviously the most important part.
[1:14:46] It's the most important part.
[1:14:47] It's the most important one.
[1:14:48] It's the most important decision that's made by members of the military under military
[1:14:51] law and laws of war and so forth and so on.
[1:14:55] What is true is that the AI planning and recognition, that is, the eyesight and the ability to do
[1:15:01] planning, is now far superior to what humans can do.
[1:15:05] That's not the same thing as lethality.
[1:15:07] Okay, so these are subtle differences in the capacity of AI that are not always recognized
[1:15:14] or appreciated.
[1:15:15] Well, I think we all agree that these systems...
[1:15:19] These systems are not perfect, and at least in things which are life critical, like deciding
[1:15:24] what to do with a patient, right?
[1:15:27] Deciding when an airplane is in trouble, anything involving military kinetic conflict.
[1:15:33] Think twice before you empower AI because it makes mistakes.
[1:15:37] I want to thank you for that, Eric, and thank you for your service in that.
[1:15:46] Just to land this plane, what I want to do is go down the panel and ask, you know, given
[1:15:51] this three-month horizon for new stuff that happens in AI and that whole sector, let's
[1:15:59] just fast-forward way into the future to 2030 and ask, wait, wait, wait, how long ago was
[1:16:09] ChatGPT?
[1:16:10] How many years ago?
[1:16:12] Three and a half years ago was ChatGPT, one, or whatever, the first public version of it.
[1:16:19] So let's fast-forward three and a half years to 2030.
[1:16:23] What is the state of the world and its relationship with AI?
[1:16:29] Let's start with you, LaTanya.
[1:16:31] Yeah.
[1:16:32] Well, I can tell you the vision that we would like to see, and we, in a way...
[1:16:36] Sure.
[1:16:37] And that you're not just passively hoping it'll get there, you're actively working to
[1:16:41] achieve this objective.
[1:16:42] Right.
[1:16:43] And it's a future that is really amazing in the sense that everyone is far more productive.
[1:16:51] I can be in multiple places at one time.
[1:16:53] There are agents that know my preferences and my desires, and they represent me in various
[1:17:00] contexts.
[1:17:01] Just to be clear, you guys use the word agent to refer to an AI representative of your needs
[1:17:08] and activities.
[1:17:09] That's correct.
[1:17:10] Okay.
[1:17:11] Go ahead.
[1:17:12] And so they might negotiate a purchase arrangement for me or be in a meeting and
[1:17:15] represent my interests.
[1:17:17] And so my productivity is just orders of magnitude greater than it is today.
[1:17:23] And you're fine with that?
[1:17:25] I'm fine with that, provided that every single one of my uses of AI in making me more productive
[1:17:32] is actually able to show that it gives me a warranty of what it will do and how it will
[1:17:40] represent itself.
[1:17:41] In other words, the same thing I get when I get an appliance, I get a warranty and a
[1:17:45] compliance statement.
[1:17:46] Got it.
[1:17:47] I want that same thing on these agents, and I'll be able to do that. In some ways, like what
[1:17:52] you guys were offering to the military, there are also many societal harms that aren't
[1:17:57] immediately life-threatening, but are about our ability to have a democracy, about our
[1:18:03] ability to understand truthful information and so forth that are at stake and need that
[1:18:08] same kind of guidance.
[1:18:09] So it'd be like an Underwriters Laboratories certification that we have for our actual
[1:18:14] appliances but applied to your AI agents.
[1:18:17] Okay.
[1:18:18] 2030.
[1:18:19] 2030.
[1:18:20] I think we have...
[1:18:21] Yeah.
[1:18:22] I think we have two paths in front of us.
[1:18:25] On one path, we move towards a place where we see what is already one of the most concentrated
[1:18:32] industries in the industrial history of the planet become even more concentrated and even
[1:18:38] fewer people have their hands on the levers of power, and democracies start to erode,
[1:18:44] and these systems ultimately are making decisions that will decide what kind of job you can
[1:18:48] have, what kind of quality of care you get.
[1:18:51] What kind of options you and your family will have going forward.
[1:18:54] The other path is that we...
[1:18:56] Just remind me.
[1:18:57] Is it...
[1:18:58] Yeah.
[1:18:59] I'll get these numbers wrong, but the sense is accurate.
[1:19:00] There are four or five companies on the S&P 500 that represent 30% of the total wealth.
[1:19:08] Of the United States.
[1:19:09] Correct.
[1:19:10] Some number like that.
[1:19:11] The numbers might not be precise, but is the sense of it correct?
[1:19:14] You're exactly right.
[1:19:15] Okay.
[1:19:16] And if you look at the numbers of people who are actually running these companies, you're
[1:19:19] talking about a handful.
[1:19:20] A tiny handful.
[1:19:21] A handful of billionaires.
[1:19:23] They are the people who are getting to make these decisions about how these tools will
[1:19:27] be used and what problems they will be used to solve.
[1:19:30] The other path is that we realize this is the most important public conversation that
[1:19:35] we can be having right now, which is why I'm so glad we're gathered here today having this
[1:19:39] discussion, and we decide that actually this is too important to be left in such a small
[1:19:46] number of hands of people who are not democratically accountable.
[1:19:49] You asked what the difference was between a...
[1:19:50] I mean, a representative and a tech bro is that we vote for one of them, and we don't
[1:19:56] vote for the other ones.
[1:19:58] And that matters.
[1:20:00] Okay.
[1:20:01] So take me to 2030.
[1:20:02] Take me to 2030.
[1:20:03] So 2030.
[1:20:04] What I would love to see by 2030 is that we say, this matters.
[1:20:06] The public interest of these technologies is at the core.
[1:20:10] And if it is going to be using our land, our energy, and our water, then we need to have
[1:20:14] a say in it.
[1:20:15] And these tools should be used for the public good, and that actually means making this
[1:20:19] something that serves all of us, not the few.
[1:20:23] Are you running for office?
[1:20:26] Chris, take me to 2030.
[1:20:31] What are you teaching in 2030 you're not teaching today?
[1:20:34] And by the way, Isaac Asimov had his three laws of robotics.
[1:20:38] Do they apply today?
[1:20:41] Is there an AI version of the laws of robotics?
[1:20:43] There is an AI version of the laws of robotics.
[1:20:45] So Asimov's laws of robotics for the non-nerds in the audience, and I say that you are all
[1:20:49] nerds, thank you for being here with us, are firstly, that a robot must do no harm
[1:20:55] to an individual, including through inaction, secondly, that a robot must obey what the
[1:21:01] human says, and thirdly, that the robot must not cause harm to itself, barring the other
[1:21:06] two.
[1:21:07] Of course, there was a zeroth law-
[1:21:08] So barring, you mean, it can't cause harm to its-
[1:21:14] The first two take precedent.
[1:21:16] Yes.
[1:21:17] In accomplishing the third,
[1:21:19] it cannot undo the first or the second.
[1:21:22] Exactly.
[1:21:23] And then there was a later zeroth law, which is the robots cannot harm humanity, which
[1:21:28] is the broader thing rather than just an individual.
[1:21:31] So this is a key challenge for AI.
[1:21:34] It's the alignment problem.
[1:21:36] I have to say, so Isaac Asimov wrote a series of short stories that were collected into
[1:21:43] a book called I, Robot, which then became a film starring-
[1:21:47] Will Smith.
[1:21:48] Will Smith.
[1:21:49] Will Smith.
[1:21:50] And in there, he explored these rules of engagement for robots very early, so the early 1950s.
[1:21:57] So we're talking about 70-plus years ago, 75 years ago.
[1:22:01] Yeah.
[1:22:02] So continue, please.
[1:22:03] Again, this is the place where speculative fiction thinks about the future in a very
[1:22:05] useful way.
[1:22:06] So AI alignment is exactly these laws.
[1:22:10] So how do we think about making a system that's helpful?
[1:22:13] How do we think about making a system that's safe?
[1:22:16] How do we ensure that it's for the benefit of humanity rather than to harm?
[1:22:17] Yeah.
[1:22:18] How do we ensure that it's for the benefit of humanity rather than to its detriment?
[1:22:19] So as we go into 2030, I think that those key elements that are similar to Asimov's laws
[1:22:28] become the key things that guide the success or failure of the AI endeavor.
[1:22:32] And that could happen within the next three and a half years?
[1:22:35] It's happening now.
[1:22:36] There are many, many thoughtful researchers working on these.
[1:22:40] You can check out the talk by Amanda Askell, the chief philosopher for Anthropic, about
[1:22:46] how she's crafted Claude's character.
[1:22:47] I think I've read Claude's soul document, the thing that governs its behavior and its
[1:22:52] code of ethics.
[1:22:53] It's incredibly thoughtful.
[1:22:55] And it gives me hope for the future.
[1:22:57] Did Google have a philosopher?
[1:22:59] Larry and Sergey.
[1:23:01] Okay.
[1:23:02] Oh, thank you for that.
[1:23:06] So Cindy, what you got going for us?
[1:23:08] 2030.
[1:23:09] 2030.
[1:23:10] A few years from now.
[1:23:11] I think that we'll see, you know, a lot of these technologies we have now, the kind of
[1:23:16] LLMs.
[1:23:17] will become more useful in our day-to-day life.
[1:23:21] Right now, it can kind of respond to questions
[1:23:25] or to prompts we give it,
[1:23:27] and maybe it'll become a little bit closer
[1:23:29] to being able to respond to queries
[1:23:33] more like, achieve this objective.
[1:23:34] So I can imagine specialized versions of this
[1:23:38] like help you respond to emails and these sorts of things.
[1:23:43] Feet on the ground, I think we maybe will see
[1:23:46] self-driving cars in New York City in a few years, you know?
[1:23:50] I think that a lot of the AI will be trained
[1:23:55] on more specialized data sets,
[1:23:57] so they'll become better at certain tasks
[1:24:01] like scientific research or mathematical research.
[1:24:06] If you train on kind of certain parts of the internet,
[1:24:11] it may be better than what it is now.
[1:24:12] Quickly, Kate, if AI got that specialized,
[1:24:15] wouldn't that be more,
[1:24:16] more efficient on the grid?
[1:24:18] It could be.
[1:24:20] This is one of the really big questions.
[1:24:22] I mean, right now, large-scale models are being trained
[1:24:24] using a lot of fossil fuels, a lot.
[1:24:27] In fact, more than previously.
[1:24:28] So if we had smaller models, if they were specialized,
[1:24:32] there are ways forward where you can use much less energy.
[1:24:34] That's what I would have figured.
[1:24:36] And for example, in the Jetsons,
[1:24:38] the documentary, the Jetsons of the future,
[1:24:42] and, as fun and creative as they were,
[1:24:45] they did not realize
[1:24:46] that maybe the car could one day drive itself.
[1:24:50] The human drove it, or you get a robot to drive it,
[1:24:52] but the car is the robot.
[1:24:54] And that's a very specific invocation of this.
[1:24:57] So based on the title of your book,
[1:25:00] I'm going to show people your book,
[1:25:03] because, well, I got Kate's book here too.
[1:25:07] Kate, Atlas of AI, the pub date 2021,
[1:25:13] but it's still happening.
[1:25:15] Apparently so.
[1:25:16] They just went to the 10th edition, so it's doing all right.
[1:25:18] Yeah, okay, good.
[1:25:20] If anyone builds it, everyone dies.
[1:25:23] This has got to be the most depressing title ever.
[1:25:27] It starts with if.
[1:25:28] Ever.
[1:25:31] Oh, that makes it all better, okay.
[1:25:35] So take me to 2030.
[1:25:36] What's going on?
[1:25:37] Are we all dead?
[1:25:39] You know, if you step back from the AI argument,
[1:25:46] and you just look broadly at what's going on,
[1:25:49] the machines are talking now.
[1:25:53] People used to think it was going to take a while
[1:25:55] to get to the point where the machines are talking.
[1:25:57] And we got these people being like,
[1:25:58] oh yeah, we're just growing these things.
[1:25:59] Turns out they talk.
[1:26:00] They're solving problems we said
[1:26:01] weren't going to happen for 50 years.
[1:26:03] Now we're trying to take it to super intelligence.
[1:26:06] That's kind of crazy.
[1:26:08] It's kind of crazy.
[1:26:09] You know, you're like,
[1:26:09] do you guys know what's going on in there?
[1:26:10] They're like, no, but we think it'll be fine.
[1:26:12] Trust us, right?
[1:26:14] In Silicon Valley.
[1:26:16] We're not trusting you, just so you know.
[1:26:18] Great. Okay.
[1:26:21] In Silicon Valley, people are spooked.
[1:26:24] You know, there's a joke in Silicon Valley
[1:26:26] that when someone leaves a normal tech job,
[1:26:29] they say, you know, I've had a great time here,
[1:26:31] and now it's on to the next adventure.
[1:26:32] And when someone leaves an AI tech job,
[1:26:35] they say, I have stared into the abyss.
[1:26:39] I'm retiring to write poetry.
[1:26:41] Please spend time with your families.
[1:26:46] Okay.
[1:26:47] And if you think I'm joking,
[1:26:48] you can look at the resignation
[1:26:50] of the Anthropic safety lead a few weeks ago,
[1:26:52] and see how close that joke is.
[1:26:55] But the thing is,
[1:26:56] in Washington, D.C., they are not spooked.
[1:26:59] In Washington, D.C., they are thinking
[1:27:01] that this is all about self-driving cars,
[1:27:03] and autonomous weapons, and job loss,
[1:27:06] and those are important issues.
[1:27:07] But what I hope we see in 2030 is that D.C. is spooked.
[1:27:14] I hope we see a world where D.C. has realized
[1:27:16] that the people running the labs,
[1:27:18] the academics outside the labs like Jeffrey Hinton,
[1:27:22] that the people inside the labs,
[1:27:23] that everyone who's close to this technology is saying,
[1:27:27] like, oh my gosh,
[1:27:27] the super-intelligent stuff could be really dangerous.
[1:27:30] Even Eric was saying, you know, there's red lines,
[1:27:31] and if we start to get close to these things,
[1:27:33] we need to watch out.
[1:27:34] Even Eric said that?
[1:27:36] Even Eric.
[1:27:38] Just briefly, you mentioned Dr. Hinton.
[1:27:42] He's one of the fathers of neural nets.
[1:27:47] Deep learning.
[1:27:48] Deep learning.
[1:27:49] And he was given the Nobel Prize,
[1:27:51] was it in chemistry, I think?
[1:27:53] Yeah.
[1:27:53] I thought physics.
[1:27:54] Oh, physics, thank you.
[1:27:55] The Nobel Prize in physics, just a few years ago.
[1:27:57] A few years ago.
[1:27:58] And we were fortunate enough
[1:27:59] to get him onto my podcast for an interview.
[1:28:02] It's online now with Jeffrey Hinton,
[1:28:05] talking about the original thinkings of AI
[1:28:09] and his reflections on where it is now and where it might be.
[1:28:12] Those three people did their work at Google.
[1:28:16] So there.
[1:28:17] So take me to 2030.
[1:28:19] Is this long?
[1:28:20] Is D.C. AI sensible by 2030?
[1:28:25] Because we're in the middle
[1:28:26] of whoever the next president will be.
[1:28:28] Yeah.
[1:28:29] So I think.
[1:28:30] Presuming it won't be Trump in a third term.
[1:28:32] Just have to, that's an if statement.
[1:28:35] Okay.
[1:28:35] I think, I think they could be.
[1:28:38] You know, we see a lot of people in the industry
[1:28:40] saying that this is horribly riskier
[1:28:42] than any other technology we work with.
[1:28:44] And we see people at these labs saying,
[1:28:47] you know, I think there's a good chance
[1:28:48] this kills everybody, but I gotta do it
[1:28:49] because I'm better than the next guy.
[1:28:51] And you know, maybe one of them's right.
[1:28:52] Probably two of them aren't.
[1:28:54] But in 2030, if we keep having this conversation,
[1:28:58] if it becomes more salient,
[1:29:01] I think we see people around the world realizing
[1:29:04] that we shouldn't do this super intelligence stuff.
[1:29:07] And the world does not need to look
[1:29:08] that different from today.
[1:29:10] You don't need to stop the self-driving cars.
[1:29:12] You don't need to stop AlphaFold doing the drug discovery.
[1:29:15] You don't even need to stop the autonomous weapons.
[1:29:17] Society can have a debate about, you know,
[1:29:19] do we want to save people on the front lines?
[1:29:21] Do we want to, you know, not do this,
[1:29:24] not go down this path? But we could stop just the race
[1:29:28] to build smarter and smarter AIs that nobody understands.
[1:29:32] That would be easier than nuclear arms control.
[1:29:34] And I hope that in 2030, leaders around the world
[1:29:38] will realize that this is lethally dangerous
[1:29:41] and we'll be putting a treaty in place
[1:29:43] to say we're not rushing to smarter than human AIs
[1:29:47] with anything remotely like the modern technology
[1:29:49] where we have no idea what's going on in there.
[1:29:52] All right.
[1:29:53] Eric.
[1:29:54] You asked a question about 2030.
[1:29:56] Yes, I did.
[1:29:58] Why don't we bet?
[1:29:59] Take me there.
[1:30:00] Why don't we bet that American democracy will survive?
[1:30:05] OK, so that's an important.
[1:30:07] So I'm going to make a bet that American democracy will actually
[1:30:12] survive, we'll get through whatever the current problems
[1:30:14] are, of which there have been many listed here,
[1:30:16] that the excesses will be addressed through legislation.
[1:30:19] I know that's hard to imagine right now, right?
[1:30:21] But I'm going to bet that democracy will ultimately
[1:30:24] reflect the will of the voters, which
[1:30:26] is how it's supposed to work.
[1:30:28] So in that scenario, which I know
[1:30:30] is difficult to imagine at the moment,
[1:30:34] let's think about some of the other benefits of all of this.
[1:30:37] The likelihood of students being much better educated
[1:30:41] because of their ability to be tutored using AI
[1:30:44] in a specialized way is enormous.
[1:30:46] The ability to have the next generation, your kids
[1:30:49] and grandchildren, being so much smarter than we will ever be
[1:30:53] is pretty high.
[1:30:54] Furthermore, this issue about jobs,
[1:30:56] there's no question there's going to be job dislocation.
[1:30:59] But.
[1:31:00] Last time I checked, every one of those predictions
[1:31:02] has been false for the reasons Neil mentioned earlier, which
[1:31:05] is the American economy is fairly efficient at recycling jobs.
[1:31:09] And there's definitely harm.
[1:31:11] There's definitely people hurt.
[1:31:12] But at the end of the day, people working with computers
[1:31:15] today make more money than people without.
[1:31:18] Furthermore, let's bet that these other issues,
[1:31:22] the regulatory issues, the tech bro issues,
[1:31:24] are resolved through some combination of self-interest,
[1:31:27] shareholder values, legal liability,
[1:31:30] and regulation.
[1:31:32] In that model, in 2030, every one of you
[1:31:36] will have an assistant, a savant,
[1:31:39] that allows you, under your control, not spying on you,
[1:31:43] not doing something bad, under your control
[1:31:45] to do the things that you most care about.
[1:31:47] You're a physicist, right?
[1:31:49] Neil's a physicist.
[1:31:49] You have a physicist's assistant who, overnight,
[1:31:52] does your thinking while you're busy sleeping
[1:31:54] and tells you what is interesting in the day.
[1:31:56] And you love it.
[1:31:57] And you can turn it off if you don't like it.
[1:31:59] So example after example, the most likely scenario by 2030
[1:32:03] is something analogous to what I've described.
[1:32:05] All of the things that everybody's worried about
[1:32:07] are true at some level.
[1:32:10] Or they're point issues that have to be debated.
[1:32:13] In my book, which you did not feature,
[1:32:17] I say this very, very clearly. The book is called Genesis,
[1:32:23] written by myself and Henry Kissinger and a few others.
[1:32:27] Kissinger was co-author?
[1:32:28] Yes.
[1:32:29] Yes.
[1:32:29] When did you write?
[1:32:30] He'd been dead for, wait, when did you?
[1:32:34] OK.
[1:32:36] The AI Henry Kissinger.
[1:32:37] OK, fine.
[1:32:40] In fairness to Henry, he finished the book the day
[1:32:43] before he died, right?
[1:32:45] And I encourage you to read it.
[1:32:47] It's his last book of his many, many great books.
[1:32:50] The important thing is thinking about this is really important.
[1:32:52] And in our work, what we said is,
[1:32:55] this conversation needs to occur over and over again.
[1:32:58] Because it is the future of us.
[1:32:59] It is us.
[1:33:00] It is how we operate.
[1:33:01] And I endorse that.
[1:33:03] But at the end of the day, we will get through this.
[1:33:06] And more importantly, we will thrive.
[1:33:08] Do you know why?
[1:33:09] Because at least American capitalism,
[1:33:11] the way it's structured with regulation,
[1:33:13] produces enormous economic and life value for us.
[1:33:17] Think about it.
[1:33:17] Solving climate change, solving the various issues we have.
[1:33:20] And remember, we're depopulating,
[1:33:22] especially because we're not allowing immigrants in,
[1:33:24] which is an error.
[1:33:25] So you get the idea.
[1:33:28] Well, let me see if I can put a bow on this.
[1:33:31] So I try to always generate a cosmic perspective
[1:33:42] on all that happens to me in a day.
[1:33:45] And even with this kind of exposure,
[1:33:47] I don't know how to tie all of this up in one bow.
[1:33:51] But allow me to say that I've been in computing
[1:33:56] for most of my life, that I've had access to it
[1:34:00] and awareness of it.
[1:34:02] And I remembered the Turing test.
[1:34:05] People said.
[1:34:06] If you interact with a computer and you
[1:34:08] don't know if there's a human on the other side,
[1:34:11] then why distinguish it from being a computer or a human?
[1:34:16] Then it passes the Turing test.
[1:34:19] The original research paper was called The Imitation Game,
[1:34:22] which became the title of a movie featuring
[1:34:24] the life of Alan Turing, which I recommend.
[1:34:28] So we're way past that.
[1:34:31] And so I remember thinking, oh, we'll never
[1:34:34] have a computer as good as people at chess.
[1:34:36] We're way past that.
[1:34:39] I remembered when Siri showed up on the iPhone.
[1:34:43] There were whole commercials with, like,
[1:34:45] who's the movie director who was interacting with it?
[1:34:47] Martin Scorsese is talking to Siri.
[1:34:50] And our jaws are dropped open.
[1:34:52] There's this intelligent agent serving him
[1:34:55] in the voice of Siri in his iPhone.
[1:34:58] And I'm saying, wow, the future is here.
[1:35:01] Today, no one talks about Siri as AI.
[1:35:05] We're so past it, it's like, oh, that's
[1:35:07] just old, that's not even interesting.
[1:35:10] AlphaGo, that's yesterday.
[1:35:13] AlphaZero, yesterday.
[1:35:16] So what intrigues me is every one
[1:35:18] of these incremental advances in AI we absorb,
[1:35:23] just becomes part of life.
[1:35:25] And we're looking at that next frontier.
[1:35:28] Yes, the AGI, the artificial general intelligence,
[1:35:33] that's a frontier that may be dangerous, but all the rest,
[1:35:37] we absorb.
[1:35:37] We think nothing of putting on GPS and having
[1:35:42] it tell us the shortest route to grandma's house,
[1:35:45] corrected for traffic, and there's no human in that loop.
[1:35:49] We're all fine with that.
[1:35:52] We're not spooked by it.
[1:35:54] You're not writing to your congressman about it.
[1:35:56] You don't try to legislate it.
[1:35:57] It's just there in our lives.
[1:36:00] So I foresee by 2030, if you will, much more AI just
[1:36:06] folded into our lives.
[1:36:08] But the AI that transcends all of our intelligence, the one
[1:36:14] that you might think is farther away than it might be,
[1:36:18] given the title of this book.
[1:36:21] Starts with if.
[1:36:22] I have to agree with Nate.
[1:36:27] I'm old enough to have lived through a lot of the Cold War.
[1:36:31] You know what kind of ended the Cold War?
[1:36:33] A lot of things contributed, but you know what mattered?
[1:36:36] The realization that if you launch, you're going to have more.
[1:36:39] You're going to have more.
[1:36:39] You're going to have more.
[1:36:39] You're going to have more.
[1:36:39] You're going to have more.
[1:36:39] nuclear weapons, intercontinental ballistic missiles from the Soviet Union into the Western
[1:36:46] Hemisphere, and we see that and we retaliate, everybody dies. We had an acronym for it,
[1:36:56] MAD, Mutual Assured Destruction. That's what brought people to the table. Say, this is insane.
[1:37:03] We have to regulate it. We have to reduce the stockpiling because preserving our species is
[1:37:08] what matters here. So based on that evidence, I have some confidence that as those dangers become
[1:37:19] more and more real, people will come to the table and say, yeah, keep the rest of the AI going.
[1:37:27] We got new medicines and new understandings of our physiology and new technologies that help us
[1:37:32] get smarter and healthier. Yes, but that branch of AI is lethal. We got to do something about that.
[1:37:41] No one should build it, and everyone needs to agree to that by treaty. Treaties are not perfect,
[1:37:48] but they're the best we have as humans. I would end by saying there's a quote from Ray Bradbury,
[1:37:57] which I verified with him. I met him only once, and I just had to verify this quote.
[1:38:02] He said a woman once came up to him, the science fiction author, Ray Bradbury, and said,
[1:38:05] all right, Mr. Bradbury, why do you write these stories about these apocalyptic futures?
[1:38:13] Is this the future you think humanity is going to inherit? And he says, no,
[1:38:20] I write these apocalyptic futures so you know to avoid them. And that is a cosmic perspective.
[1:38:29] Join me in thanking our panel. Oh, my gosh. LaTanya, Kate, Chris, Cindy, Nate, Eric.
[1:38:40] Oh, we found the book. We found it.