About this transcript: This is a full AI-generated transcript of Doha Debates: Are we heading into a world divided by AI tribes?, published April 21, 2026. The transcript contains 8,064 words with timestamps and was generated using Whisper AI.
"There's no turning back. AI is everywhere, from maps to shopping. Some even get therapy from chatbots. Yet they rarely ask whose expertise those bots reflect. So how will AI shape our collective thinking, our politics, our culture? AI models are said to carry the values of those who have created..."
[0:08] There's no turning back. AI is everywhere, from maps to shopping. Some even get therapy from
[0:14] chatbots. Yet they rarely ask whose expertise those bots reflect. So how will AI shape our
[0:20] collective thinking, our politics, our culture? AI models are said to carry the values of those
[0:26] who have created them. Some have been called libertarian, some woke, others appear to change
[0:32] from day to day or even user to user. I'm Mohamed Hassan and this is the Doha Debates Podcast.
[0:37] The question we should be asking is to what extent AI will solidify and accelerate the dependence that our
[0:47] society already has on corporations. I think it's the individual versus him or herself. That's where
[0:54] the struggle for freedom will exist in the future. Even though these models are supposed to be open
[1:01] for use, they ultimately come embedded with biases and political ideologies. This idea that there's
[1:07] this existential race between the US and China to get to super intelligence first. This is like a
[1:14] path to planetary suicide. Today's debate, are we heading into a world divided by AI tribes?
[1:20] And is this something to celebrate or fear? I'd like to welcome our guests. James Brusseau is
[1:33] a professor of philosophy and computer science at Pace University. Robert Wright is the author of The
[1:38] God Test: Artificial Intelligence and Our Coming Cosmic Reckoning. Elina Noor is a senior fellow in
[1:44] the Asia Program at the Carnegie Endowment for International Peace. And Evgeny Morozov is a tech theorist and author.
[1:50] A warm welcome to you all. Thank you. Happy to be here. The four of you are experts in this topic, and I,
[1:58] like a lot of people, am just trying to figure out what is going on. So I want us to have a rigorous,
[2:03] high level debate. But at the same time, I'm going to interject from time to time with some really dumb
[2:07] questions. So please allow me. James, if we can begin with you, can you give us a picture of the
[2:15] AI landscape as it exists today? Do you think the direction we're heading in is one of unity or division?
[2:23] Well, the landscape is certainly confused. All of us, I suspect, are somewhat confused about what we see
[2:28] happening around us. But to my mind, the landscape is full of potential. So it seems to me that there
[2:37] are many kinds of positive aspects to the changes we are seeing. There are different ways that we'll
[2:41] be able to receive and understand information. So my general outlook is positive. I will say quickly,
[2:48] it seems to me that the major threat that we are facing now is one of a kind of homogenization.
[2:58] That is, I'm thinking just of the experience all of us have had. For example, on Netflix,
[3:04] when we see a movie and then enjoy it and we get recommended another which is similar and then another
[3:09] which is similar. I think the same thing goes for LinkedIn, which proposes job opportunities for us,
[3:16] or for dating websites, which propose partners for us. These kinds of tools increasingly force us
[3:24] to accept possibilities which correspond with the data that we have accumulated about our own past.
[3:32] Right? That's how Netflix produces your movie recommendations. They say, well, what have you
[3:36] watched in the past? What have you enjoyed? This is what you're going to get. Now, that can be very
[3:42] satisfying and enjoyable. But the risk, and so to answer the question, I've said there's some positive
[3:47] things. The risk and what worries me about this technology is the possibility for homogenization
[3:55] even with ourselves. That is, it will become impossible for us to escape our own data. The
[4:03] preferences and interests that we have established about ourselves will determine what we see when we
[4:09] turn on the television set, what jobs we are offered, what partners we might receive. So I think that
[4:15] that is the threat. And then finally, I will add, I think that the challenge that waits for us is to
[4:22] find ways to use these tools to help us escape who we are, to help us become new and different kinds of
[4:30] people. And I think that if we can do that, then I believe that the future will be very positive for
[4:36] this technology and for humanity.
[4:38] Elina, if AI is creating a homogenous experience of the way that we see the world, the way we interact
[4:46] with technology, is that not then going to lead to a kind of unity?
[4:50] Yes, potentially. And I think it depends again on the use cases, right? James pointed out some examples.
[4:57] I think one of the examples that has a double-edged sword is these accent neutralization programs
[5:07] that are being churned out by AI. So for example, you have all these business process outsourcing
[5:13] outfits in places like the Philippines and in India and Kenya. And on the one hand, they allow for
[5:19] greater work opportunities. You know, this idea that there is dignity with work. At the same time,
[5:26] through this accent neutralization software, it flattens the accents around the world in order for
[5:32] there to be greater efficiency, greater levels of communication, supposedly. But what does it do to
[5:40] a person's identity, a person's culture? The way they speak reflects their history, their experiences.
[5:46] And once you start flattening that, yes, there is some unity of communication, I suppose. It facilitates
[5:52] easy listening and understanding. But at the same time, I think there's a real risk of people losing
[5:58] who they are for the sake of facilitating communication and understanding.
[6:05] Evgeny, what do you think about this idea that our identities themselves are being shaped by this AI?
[6:11] Are we being made to become robots that all think and behave the same way?
[6:16] I mean, look, our identities are shaped by almost anything we use, by any kind of technological or
[6:24] ideological infrastructure. So, of course, AI would be just like any other technology exercising effects
[6:31] upon people who use it. The question is, what kind of effects do we want it to exercise upon us in the
[6:38] long term? And what kind of options do we still have in terms of determining the path and the trajectory
[6:45] in which AI systems would travel? Because if we look at earlier technologies, we see a very
[6:53] interesting trend. All of them start as a series of competing ideologies about the good life,
[7:01] what it means to live with the airplane, or what it means to live with the car, or what it means to
[7:07] live with cable television. Those ideologies, however, all get embedded into socioeconomic systems of
[7:15] various kinds, into governance, into corporations, into regulatory bodies. And in some cases, there is some
[7:22] resonance where a certain ideology for individualistic driving or living gets attached to a set of
[7:30] business interests, and it crowds out all the others. And in some cases, like the United States,
[7:36] this car-centric ideology won out. In some cases, like in many European countries, it did not, where you
[7:42] still have the idea of public transport. I don't think that AI is any different. We are still very early in this
[7:48] development cycle. And there are still relatively open pathways that it can travel, depending on what
[7:56] kind of ideology is allowed to win out, and what kind of socioeconomic institutions are in place to be
[8:03] able to balance what would otherwise be the dominant ideology, which in our day is one that favors the
[8:10] interest of corporations. So whether we manage to balance it out would, to a large extent, determine how
[8:17] homogenizing and unifying these technologies would be, because they can have all sorts of effects.
[8:23] And it's still up to us to determine what kind of effects we want it to have.
[8:28] And Robert, what do you think of this idea of ideologies? Obviously, this is a really astute point
[8:34] that the technologists that shape our technologies, their ideologies can and will be imprinted onto these
[8:42] systems. Do you think this is what we're seeing already now with AI?
[8:46] I think the people who build the AIs can, in principle, use them to shape ideologies through
[8:52] the selection of training data, through the selection of the experts who play a role in the post-training
[8:58] phase. A lot depends on which magazines of which ideological slant are part of the training data.
[9:06] But even beyond that, I think there may be a naturally, you could say, tribalizing effect of AI, much as
[9:16] there has been with social media. And for somewhat the same reason, which is that there's an incentive to
[9:23] maximize engagement. That's what the corporations want. They want you to keep using their product. We've already seen that in
[9:30] in the case of social media. That leads to algorithms that reinforce people's conviction that their
[9:37] ideology is right and the other was wrong and feeds them information selectively. And we're already seeing
[9:42] the same thing in AI. And if anything, AI can, in principle, do that in a more kind of microscopic,
[9:50] fine-grained way. And we've seen cases of people's very specific, narrow beliefs
[9:57] being reinforced, even in a pathological way. So people who think maybe they've discovered some
[10:03] secret formula, being encouraged by the AI to believe that, even though it's not true, or to
[10:12] become more convinced that their conspiracy theories are true.
[10:15] AI psychosis, I think, is what they're calling it.
[10:18] And it's a real danger that it will build kind of cocoons for people to live in that are even narrower
[10:25] than our social media cocoons. James, what do you think?
[10:28] You know, let me try two different ideologies on for size here. One ideology is the sort of
[10:36] Heideggerian early 20th century ideology, which is that what we are looking for as human beings
[10:43] is some kind of authenticity. That is some sense of who we are, which is stable and solid,
[10:48] in which we can locate and cling to. If we want to go that route, if authenticity is the
[10:55] ideology we want to follow, then it seems to me that some of these problems, which you're
[10:59] certainly right to point out, Robert, but some of these problems are symptoms of a kind of strength
[11:04] that the AI does help us find for ourselves, each one of us, who we are, in some ways that are trivial,
[11:11] going back to my example of Netflix movies, but in some ways which are not trivial. For example,
[11:15] if the AI consistently finds for me on a dating website, the same kind of partner, then perhaps
[11:21] that's who I am. And I should cling to that, hold on to that. So that's one ideology. I think AI can
[11:27] serve that ideology. But there's also a different possible ideology, which is that freedom is more
[11:33] important than authenticity. And in that case, what we would want our artificial intelligence to do,
[11:39] and I personally, I will say I argue for this, what we would want our artificial intelligence to do
[11:44] is, going back to your point about languages, is precisely break those kinds of uniformities,
[11:51] those kinds of generic realities that AI can sometimes force us into, and help us discover
[11:58] new ways of living and existing. So those two ideologies are, I think, one way of
[12:05] thinking about this. Fundamentally, what is in play in artificial intelligence is whether
[12:11] we want to be who we are, and AI will help us find that, or whether we want the ability to choose different
[12:18] kinds of futures for ourselves, ones that are unknown to who we are now.
[12:22] Elina, what do you think?
[12:23] So that's a really interesting tension that you brought up. And I think there's
[12:28] an overlap as well between authenticity and freedom, because in finding authenticity, you need that
[12:34] freedom as well. Whether AI can help us find who we really are, I think is a more questionable proposition,
[12:42] simply because a lot of the systems are built in global North countries, and that are sold to us in
[12:53] the global majority. And a lot of that data as well, has a bias towards English language, data training
[13:03] sets, as well as the ideologies that go with that. So for example, in Indonesia, Stamford Raffles,
[13:12] Thomas Stamford Raffles was once the administrator of Java, and he wrote the book, The History of Java.
[13:19] Now, I'm willing to bet, I don't speak Javanese, but I'm willing to bet that his translation and
[13:24] interpretation of the Javanese people is slightly, maybe even dramatically different from how the
[13:31] Javanese would describe themselves. But he, of course, had a different objective to writing the
[13:37] history of Java. And it was to tell London at the time that, look, these Javanese people, they are of
[13:44] a certain civilization. They are civilized because they have hierarchy in their structure. And this is
[13:51] why perhaps we should not impose such a harsh rule over them. And I don't think the Javanese saw themselves
[13:57] that way. And so this idea of who we are authentically is partly in that search for freedom.
[14:07] But at this point in time, at this developmental stage, and understanding full well
[14:12] Evgeny's point that we can still shape it, I don't think, given the bias in the training data and the
[14:19] systems of deployment, that it's particularly favorable for people in the global majority to find that
[14:25] authenticity when there are narratives about them that have been told by others and that continue
[14:30] to float around in the ether. Can I just ask you one quick question? Just a quick thought. Do you think
[14:35] AI is fundamentally about societies and communities? Or do you think it's fundamentally about individuals
[14:42] in their lives? Because it seems that I was sort of talking about how individuals live. That's a good
[14:46] question. And I assume each of you will have a different idea. Right. And I think that you're coming
[14:51] at this from how societies are. So I wonder what you think about that, about what AI is,
[14:55] where it goes fundamentally. Yeah, my instinct is that it's a tool for society. But obviously,
[15:02] the emphasis on whether it is something for individuals or for a community per se is going
[15:07] to depend on the context in which it is deployed. Evgeny, what do you think about that? Yeah, no,
[15:12] I just want to make a provocation maybe in that I think that AI in itself does nothing. It's an empty
[15:19] concept which is up to us to fill with meaning. So in that sense, if you really want to stick to the
[15:25] dichotomy of authenticity versus freedom, then we should push it to the ultimate conclusions,
[15:30] meaning that we should have the freedom to fill in that concept with the meaning that all of us as
[15:36] individuals or communities would like to fill that with. Right. And in that sense, I don't know why
[15:42] we would opt for freedom for us as individuals, but we would accept that there is some kind of
[15:47] essentialism or authenticity about AI that it should be about individuals or it should be about
[15:53] communities. It should not be about anything. It's essentially a very open-ended plastic set
[15:59] of technologies, which depending on billions of dollars that are now going to flow into this might
[16:06] go one direction or it might go in a completely different direction. We've seen that with cable
[16:11] television. We've seen it with the internet. We've seen it with a lot of other technologies where
[16:15] there was a lot of rhetoric about how these technologies will develop on their own because
[16:20] they have some inherent properties, but the institutional support was not in place.
[16:26] And they ended up following the path of least resistance, which was an agenda favorable to
[16:31] companies that were well positioned to take advantage of those technologies and monetize them.
[16:35] It's all up to us and up to institutions, communities, politics, and political struggles to define
[16:41] how all of these ingredients should fit together and who should pay the bills. Should it be individuals
[16:46] paying a monthly subscription fee? Should it be some kind of an intermediate institution like we have
[16:52] with libraries, which did create very utopian spaces, which suspended copyright law temporarily
[16:59] so that people can have wider access to literature? Why aren't we thinking about alternative institutional models?
[17:05] Robert, what do you think of this idea of the plasticity of AI? Would you agree with it?
[17:10] In principle, AI can be anything we want in a couple of senses. First of all, in principle,
[17:17] societies, nations, even the whole world can get together and declare policies and constrain it in
[17:24] that way. Moreover, individuals can choose models they like with specific features they like. In principle,
[17:32] you could offer consumers a version of the model that will make them mindful,
[17:39] which I'd like to be more mindful than I am. It's not easy. It wouldn't surprise me if you have
[17:49] different religions offering different kinds of models. In principle, the range is unlimited, but
[17:58] I don't think we should act as if it's going to be easy to secure our freedom
[18:04] in the face of AI because the makers of the AIs are going to have their own agendas. In the case of
[18:11] corporations, again, that's to keep us using the AI. And AIs are going to be better and better at
[18:18] manipulating us, at figuring out what we like, what will make us come back to the AI. I mean, there's already
[18:27] been a study showing that AIs were better than people at persuading other people to change their
[18:35] minds about things. And in particular, the AIs were better if they were given access to personal data
[18:42] about the people they were trying to persuade, then they really excelled, okay? And when you remember
[18:48] that an AI can, in principle, search your whole social media history, immediately come to know you
[18:54] very well, and then set out to persuade you of something, like to keep using it, right? Like
[18:59] to push whatever buttons of yours will keep you using it, you know, that's a lot of power. And I
[19:05] think if we're going to remain free amid the AI revolution, we're going to have to try consciously
[19:12] to cultivate, you know, what some people are calling cognitive sovereignty, you know, and be able to resist
[19:21] manipulative tendencies that I think inevitably will be emanating from the technology.
[19:27] James, this question of freedom is really key here. How do we know that we can trust these systems
[19:32] that we're using? Terrifically provocative. It seems to me that this is a critical point. In the past,
[19:39] we have been enslaved. We have been limited. We've been restrained by others, by other people,
[19:47] by other institutions. But today, the restraint, the limitation, even in some cases, the slavery,
[19:54] in the sense of nudges, all of that is something that we do to ourselves. We are controlled by our own data,
[20:03] our own past. That's what the AI is manipulating. We cannot be nudged, I cannot be nudged by her data,
[20:09] or by your data. What the AI uses to control what charity I want to donate to, or what person I want
[20:17] to find to date, what the AI uses to control are my own preferences and my own interests. So, it's a
[20:23] very different notion of freedom we have now. Freedom is a struggle that I have with myself and with my
[20:28] own past, as opposed to what it used to be, a struggle I had with institutions or with others.
[20:35] So, then, I think I stand a little bit contrary to your view. It seems to me that we live in,
[20:40] we're entering a post-institutional age. And I think that, for the reasons that you were stating,
[20:47] everyone can now, or very soon will be able to, form their own language model, which is tailored
[20:54] to their own preferences and their own aspirations. In the case of language, there are many people working
[21:00] now, and as you know, of course, better than I do, many people working now to create new models,
[21:05] which do reflect the Javanese language and culture and so on. So, we're going from one general language
[21:11] to specific languages. And I think we'll continue moving down toward one person. Each of us will have
[21:16] our own large language model with our own preferences, our own interests. And so, that's where... So, I don't
[21:23] think it's the individual versus the institution. I think it's the individual versus him or her self.
[21:30] That's where the struggle for freedom will exist in the future.
[21:33] Evgeny, is AI, in your mind, liberating us from these power structures?
[21:38] No, no. First of all, I don't agree that we're entering any post-institutional landscape. I think we
[21:44] are actually living in a hyper-institutional landscape. And the institutions that primarily
[21:48] dominate our lives are corporations and one meta-institution that I would call capital. And the data
[21:55] that AI essentially rides on is extracted from me, not because I make these decisions, but because
[22:02] there are institutional pressures by banks, by financial companies, by the fact that they need
[22:07] to have a credit history, by the fact that our parallel welfare system of some kind expects that
[22:13] I need to be using digital services because they're heavily subsidized by venture capital or sovereign
[22:19] wealth funds. And it would be naive of us to say that the pressures on a billionaire in Silicon Valley,
[22:24] who can be off-grid and send their kids to a school where there are no devices, are the same as on
[22:30] someone who is earning minimum wage and has to juggle all these apps to essentially stay afloat.
[22:35] Of course, these pressures are very different. And that pressure comes from abstract institutions,
[22:40] maybe. Maybe it's the wage that's exercising on the fact that you have a stomach and you need to
[22:45] eat. But those pressures are real. They're not fake. So we cannot do anything we want just because there is
[22:50] suddenly a series of AI apps that allow us to be hyperproductive. I mean, there are still constraints
[22:56] and pressures. So in that sense, I don't see us liberating ourselves from institutions anytime soon.
[23:03] But beyond that, again, the question that we should be asking is to what extent AI will solidify and
[23:10] accelerate the dependence that our society already has on corporations. I mean, we used to have institutions,
[23:18] like museums, libraries, media, and so forth, that had some independence from both the state, at least
[23:27] like theoretically, and the corporations. Right now, if you look at what's left of those institutions,
[23:33] they by and large survive on donations from tech companies. And many of them have been replaced by
[23:39] them entirely. And in that sense, AI will accelerate that process. And we will end up in a world where
[23:47] the previous setup, where at least there was some pluralism and the opposing logic of those
[23:52] institutions were clashing with each other and allowed some space for the individuals and citizens
[23:57] to flourish, will just be oriented towards serving the needs of one sector. And that sector will be big
[24:02] tech. I think Evgeny's point is perfectly valid, though, at the end of the day, we're still very
[24:07] dependent on only a few actors, primarily within the global north. We're dependent on them for
[24:13] infrastructure for standards, technical and policy standards that determine what we can do, what
[24:19] types of chips we can use in order to create these much more localized language models, for example.
[24:26] So at the end of the day, I think so many of us in the world are ultimately beholden to just a few
[24:31] powerful players, corporations essentially, that determine how much sovereignty, how much freedom we
[24:37] can exercise, even with the little crumbs that they throw us in terms of, you know, what sort of
[24:43] programs we can create with AI, for example. The nation state of Tuvalu is the first digital nation
[24:53] in the world, because it is at the precipice of an existential crisis due to the climate emergency. And a lot of
[25:01] the people of Tuvalu have decided they're going to migrate their whole nation state onto the digital
[25:07] platform in order to preserve what's remaining of their culture and traditions so that future generations
[25:14] can understand who they are in terms of their identity. But ultimately, whether they succeed or not
[25:19] in creating a digital nation is dependent on what sort of infrastructure they can access, the kinds of
[25:26] submarine cables, the kinds of chips to power the data centers, and the kinds of training data they're
[25:32] able to collect themselves. And so this gets to the point that I think both James and Evgeny are right,
[25:39] we're just looking at it at different ends of the spectrum. And do you think this technology can allow
[25:45] these nations, sometimes very small nations, to develop their own sense of autonomy, their own sense
[25:49] of sovereignty, not only over their future, but also over their past? Yes, yes, absolutely. But then we
[25:56] get back to the basic question of, who does all this infrastructure belong to? And what sort of
[26:01] standards are they setting for the rest of us in order to access their technologies? We talk about
[26:06] open models, Bob, you mentioned that, right? You look at the AI action plan that the White House put out
[26:12] earlier this year, it talks a lot about open models. But at the same time, Washington is trying to market
[26:20] an American AI stack with American values. So they seem to be completely conflicting. Yet at the same
[26:26] time, these are kind of the options that we're left with. There are open models, but we have
[26:31] absolutely no transparency into their training data. So we have no idea of the biases, no accountability
[26:38] in that sense. And even though these models are supposed to be open for use, they ultimately come
[26:45] embedded with biases and political ideologies that many of us may not even be aware of.
[26:52] Is there a way to liberate ourselves from these biases or at least be aware of them when we're
[26:56] using these systems? Well, I think in terms of national bias, there is. I think the open source or open
[27:05] weights models that China in particular is focusing on are not very far at all behind the proprietary
[27:15] models that the US is marketing. And if I'm a nation that doesn't want to be dominated by American
[27:25] decisions about what a large language model should be and what its training data should be, and I do
[27:31] think it's outrageous that they don't even tell us what the training data is, that they don't even
[27:35] disclose what journals it did and didn't train on and so on. But if I'm a nation that wants to be
[27:45] liberated from that kind of bias, I'm going to find these open source models very appealing because
[27:52] you can kind of retrain them to whatever extent you want and actually change the so-called weights,
[27:59] which is what determines how they work inside and in effect create a customized model. Now, at the
[28:07] individual level, that's more challenging. I mean, most people don't have the wherewithal
[28:12] to customize an open source model to them. James, what do you think of that?
[28:17] Are these teething issues? I myself believe that the technology is moving so rapidly that working
[28:25] outside the system is not practical. And so, in this way, also, I'm post-institutionalist. I believe
[28:31] that even if we wanted institutions to be powerful and to be helpful, they could not be. Let me give you
[28:37] just a quick example. So, I live half my life in New York City and the other half my life in Trento,
[28:42] Italy. I teach AI ethics in both of those two places. But because I'm in Italy, I've seen quite
[28:48] a bit of what the Europeans are doing with respect to laws. And one very important law for them was this
[28:54] AI Act, which was this 18-month or two-year work to construct this regulating structure that was going to
[29:01] control AI finally and at last. And just as they were rolling it out, what happened? ChatGPT appeared.
[29:08] So, we have this institutional rule. It's about to be imposed, rolled out, and rolled right back,
[29:15] because ChatGPT rendered the whole thing irrelevant. So, it seems to me that the idea of institutions
[29:21] or working outside the system will simply not work in a context of this velocity. Instead,
[29:28] these kinds of problems that we're facing can be managed or perhaps we can only manage them
[29:33] by working inside the system. So, I think that's possible.
[29:36] I disagree with everything in this statement in that this is a false dichotomy that you're drawing,
[29:43] first of all, between regulation on the one hand and some kind of innovation on the other. First of all,
[29:49] the US regulates heavily. Look at all sorts of rules the US imposes on the rest of the world when it
[29:56] comes to exports of AI. The US regulates more heavily than anyone else what actually the rest of the
[30:03] world is allowed to do with technology. Who has access to chips? Who has access to what kinds of
[30:08] chips? The US is not by any means a laissez-faire player. Second, Europe is not a good example to follow,
[30:17] in part because Europe's own hands have been tied by its dedication to some kind of neoliberal ideology
[30:25] which prevented it from building institutions that could balance and counterbalance actually what
[30:32] the market does beyond just regulation, beyond just creating laws. You can be creating massive
[30:37] public infrastructures that would actually be competing with the tech providers that are private and
[30:43] that come from Silicon Valley. Europe has not done that because it has been structurally prohibited
[30:48] by its own laws from building up its own tech industry. You're not allowed to do state aid in Europe and
[30:55] unfortunately that's how it is. China on the other hand has been regulating heavily and combined it with
[31:01] active nurturing of its domestic AI industry and we see the results. It's the only country that actually has
[31:08] something to show against the United States. It has a tech industry that actually competes with it and it's
[31:15] doing pretty well pursuing the regulatory path that we are told cannot actually work. But beyond that I would
[31:21] also be very cautious about thinking that some kind of technocratic fine-tuning utopia where we just
[31:29] tinker with the weights of the model is going to save us in part because much still boils down to the
[31:35] training data. And the training data is a very expensive project to undertake. And even if you play
[31:41] with the Chinese models you will see that despite the fact that they're open source and they're open
[31:46] weights, they still end up with the same ideological biases as the American models,
[31:52] because they're trained on the same set of pirated books that the Chinese companies got off the same
[31:57] torrent sites as the American companies, right? And if you want to have a counter project you really need to
[32:03] undertake a massive government-funded effort of going and finding alternative data sets. It might be data in
[32:10] your libraries, it might be that you will need to go and digitize millions of books in your own language
[32:15] as opposed to relying on the English language books that are traveling around the internet, but you will
[32:20] need to have some kind of an effort to do that. And by the way, even the market players who have trained their
[32:27] models with that data from torrent websites, they essentially leveraged the volunteering spirit of
[32:34] people who have been scanning books, digitizing them and uploading them online. They did not incur those
[32:39] costs on their own. But to think that we will be able to solve that through some kind of a technical
[32:45] fine tuning of the model, I think is very unrealistic. And in that sense, we do need to think about
[32:51] institutions and not just regulations and laws. Regulations and laws can be very useful if they're
[32:58] tied to an institutional vision. Elena, what do you think? So James, I would just offer that so many of
[33:04] us in the global majority work within constraints and we have no choice but to fine tune because of the
[33:10] sheer cost of building your own foundational models that Evgeny has been talking about so much. And I think
[33:17] these smaller models, they offer some degree of autonomy, but going back to the earlier points
[33:23] about how we would still be dependent on these few large players and a completely unequal market
[33:31] concentration of power in terms of who gets what, right? And so, you know, even fundamental concepts
[33:38] like time and space, for example, that are embedded in the training data of foundational models, we have no
[33:45] eyes on because there's zero transparency. So for indigenous communities, for example, that think
[33:51] of time and space as completely inseparable concepts, for them to rely on foundational models with completely
[34:01] opaque training data is going to change how they even fine-tune these models for
[34:08] themselves. And so for many of us, we have no choice but to work within these constraints. And so I'm
[34:14] sympathetic to Evgeny's argument that perhaps we just need to flip the tables and recreate
[34:20] institutions that work for the majority of us. Is AI then a solution to that in your eyes, James? Can AI,
[34:28] can we train AI to then liberate itself from these constraints? Right. So one of the projects I'm working
[34:34] on is called Curiosity or the Curiosity Engine. And so what we're trying to do with a number of students, myself,
[34:41] is we are trying to work with recommenders. So again, what kind of jobs you're recommended,
[34:46] what kind of language you might be recommended, and so on. So we're trying to find a way that
[34:51] recommendations, which is one important aspect of AI, can lead away from, again, the individual's
[34:58] personal data set and interests. And I think that that kind of project could be mimicked on the larger
[35:04] social level, find a way to work again within the system to lead away from the Silicon Valley
[35:12] data set. I think that's the challenge. So the question is, how do we address that challenge best?
[35:17] I suppose here, we just come down to personal preference. I'm fundamentally an individualist.
[35:22] And so I'm going to be prejudiced towards saying that there's an individual path to these solutions.
[35:27] Yeah, I certainly agree with you that we should only resort to legislation when that's necessary.
[35:34] And a lot of problems you can solve without that. I would like to point to a category of problems
[35:41] that, I think, are going to call for policy solutions. And they're going to call for policy
[35:48] solutions at the international level. And I think this ultimately gets back to
[35:53] the challenge of the technology being kind of tribalizing at the national level. I mean,
[36:01] if you look at something like the threat of biological weapons, okay, there's no doubt
[36:10] that these models are going to make it easier for people to do various nefarious things, for individuals and
[36:17] small groups of people to make biological weapons and so on. That problem is particularly
[36:23] acute actually for open source models. And it's why some people think that they shouldn't
[36:28] be developed. But in any event, that's not a problem you can solve through national policy.
[36:32] I think we'd all agree that preventing a global epidemic caused by a bioweapon is a valid policy
[36:38] goal, but it's not a policy goal you can address at the national level. And very broadly, I think,
[36:45] if we don't confront the challenge of AI as a cohesive global community, we are in deep trouble.
[36:51] And so if it turns out that the technology itself empowers nationalism, that could be a kind of
[37:01] paradoxical and very deep problem. And right now, much of Silicon Valley is wed to this idea that
[37:07] there's this existential race between the US and China to get to super intelligence first. I think
[37:14] this is the absolute worst environment in which to confront the AI revolution. This is like a path
[37:21] to planetary suicide. And the dynamic is very strong. The impetus behind it is very strong. I mean,
[37:29] you know, 10, 20 years ago, kind of China hawkism was not the kind of de facto ideology in
[37:40] Washington. It's become that for more than one reason. But lately, one big reason has been that
[37:46] so many big Silicon Valley players like OpenAI want Americans to be as afraid as possible of China,
[37:53] so that they can say that the American government should not regulate OpenAI or put any constraints
[38:00] on it whatsoever, and indeed should subsidize its energy and whatever else.
[38:04] Now, there are bright sparks in the innovation in terms of the governance models around AI that we're
[38:12] seeing, particularly with the indigenous communities, where they are challenging some ideas that are out
[38:20] there about what it means to govern AI in a space that prioritizes exploitation and extraction, particularly
[38:29] of data, but also other natural resources. And the Maori community in particular have decided that
[38:36] they're going to have their own governance models of their own large language models because of the
[38:42] history of having been exploited. And I think there are lessons to be drawn here for many of us.
[38:48] When we think about these indigenous models that we're seeing growing, does that then present a future
[38:53] in your mind of the potential for AI to allow these different communities to create their own tribes,
[39:00] their own spaces, that they then can express that autonomy, that sense of sovereignty in?
[39:06] Yes. And there are certainly some really positive outcomes that will result from that sort of
[39:13] innovation. At the same time, for many multi-ethnic communities and nation states, it does present a bit
[39:20] of a risk because it means a fragmentation of the very social fabric of a nation. And I'm thinking
[39:26] here of a lot of countries in Southeast Asia, but also in Africa, where, for example, there are local
[39:33] language models that are being built in terms of ethnic languages. But the history of these countries,
[39:41] as in Indonesia, as in Malaysia, has been built around unifying people through a single language.
[39:48] And so when you have this splintering of innovation, of models that are meant to be
[39:54] ethical in terms of how they cater to local communities, there is the larger question of what
[39:59] does that mean for national unity? And how do we bring those contradicting tensions together?
[40:05] This is a point about nationalism, which you brought up before, Robert. How do you see this happening?
[40:10] There's obviously a driver now for nationalism and nationalistic pride, especially when it comes to things
[40:17] such as the defense industry, which we're seeing a lot of companies in Silicon Valley voluntarily
[40:22] stepping into. And there's this race now between the U.S. and China. Do you think that we can be
[40:30] getting these models that are reinforcing these identities? And is that a good thing or a bad thing?
[40:36] You know, nationalism is always dangerous. And international polarization is always dangerous.
[40:43] But I think in a way it's going to become more dangerous than it's ever been by virtue of AI.
[40:49] I really think this technology is going to be so powerfully dislocating across a number of dimensions
[40:59] and in many different ways call for a concerted international response, that if we don't become
[41:06] more of a global community than we've ever been before, we're in really deep trouble.
[41:11] James, do you have this concern that AI might entrench us in our already divided lines, whether
[41:16] it's national or race?
[41:18] Right. So I think it's becoming fairly clear from our discussion so far that I start almost all of
[41:24] my thought from the individual level. So I would say two things. I would say one, almost certainly,
[41:31] that's true. We're seeing clearly how filter bubbles are forming. We're seeing clearly how polarization
[41:37] increases. We're seeing clearly how people are being directed into narrower and narrower affinity
[41:41] groups. That's just undeniable on the ground. But again, going back to sort of where we started,
[41:46] it seems to me that the answer to that question is one for AI companies to work with individuals to
[41:53] help them find new ways of thinking about the world, which would be not moving from one national
[41:58] structure to another national structure or from one language to another indigenous language.
[42:03] Instead, I think it would mean for individuals to find their own way to a kind of language that they
[42:07] might want. For example, perhaps I might decide that I want to learn Italian. I want to live like
[42:11] the Italians. Now, I have no Italian past. I have no affinity with the Italians. I have no indigenous
[42:17] connection with that. But why can't I become Italian as opposed to recreating sort of the roots
[42:25] I had with my ancestry, which happens to be Swedish and Danish? And I could go back. There's a thing called
[42:32] lutefisk that we could eat and there's all kinds of specific things we could do. But I could just go
[42:37] forward too. I could use artificial intelligence, the kind of tools that it offers to help me gain entrance
[42:42] to a different kind of culture. So I think in that way too, this question about nationalities is shattering.
[42:50] And instead it will be up for individuals to find their own place and their own destiny in this future.
[42:55] And I think one hope generically about kind of the formation of new tribes that are not national
[43:01] tribes is that some of them will be first of all benign, you know, not bad and international. They
[43:06] will cut across national bounds and so form commonalities of interests within multiple nations.
[43:14] And so kind of erode the power of nationalism. I wonder what you think about this path forward,
[43:21] you know, this, this potential for division, this potential for autonomy and the potential
[43:28] and the reality of the acceleration at which this technology is going.
[43:32] Yeah. Well, I, unlike James, do not start from the individual analysis. So I look at the actually
[43:40] existing tech companies and not some hypothetical firms out there. The actually existing tech companies
[43:49] all go and their CEOs all go and genuflect in front of Donald Trump. They have funded half of his
[43:56] White House reconstruction. They are part and parcel of the nationalistic and you might even say
[44:03] jingoistic agenda of the United States. So the idea that I would want to have more of these
[44:08] companies intermediating my life to help me learn Italian? No, thank you. I became an Italian citizen,
[44:14] learned Italian. I'm doing fine. I don't need Sam Altman helping me along the way. By the way,
[44:18] all of that is true. So in that sense, no, you know, I just don't know what universe we live in. I
[44:23] mean, these are real actors. Nvidia is soon worth maybe already worth $5 trillion. It's a company that's
[44:30] deeply embedded now in Trump's project for redefining the American hegemony and the American empire in the
[44:37] world. How can we possibly be making these hypothetical statements about what these companies might do to
[44:44] create a more global universe? They are all residents of the United States. We have seen that despite the
[44:51] early statements, even during the first Trump administration, they all ended up playing up to
[44:57] Donald Trump's tune. I mean, I still remember in 2016 and 17 when people like Sergey Brin were going to San
[45:05] Francisco airport to protest against the first ban on the Muslim countries. I mean, now Sergey Brin will
[45:12] probably be transporting people to the airport if Donald Trump asked him to. You know, this is where
[45:17] we are. These are actually existing tech companies and they're the ones building and rolling out AI. Do I
[45:24] want to have this powerful set of infrastructures in their hands or do I want to redistribute control over
[45:30] them more widely? Of course, it's the latter. Why would I, not being an American citizen, want
[45:35] them to dominate it and control it and make it part of this somewhat insane Trumpian project?
[45:41] And then I want to give you the final word here. When it comes to, you know, you've talked about the need
[45:48] for caution, the need to potentially stop, but then there's also the realities of what is actually
[45:52] happening in this technology. And in some level, it's very difficult to actually put a stop to it.
[45:57] So from your perspective, how do we move forward from here? How do we protect ourselves from the
[46:03] worst risks that we already see while also unlocking its potential? I think it's really good to have
[46:08] innovation depending on what the particular use cases are and depending on the context in which
[46:14] these innovative solutions are deployed. That said, you know, we've talked about kind of the
[46:19] communalism and nationalism that AI can engender. But I think to Evgeny's point as well, there is a real
[46:26] risk of ideological tribalism as well. So on the one hand, yes, you democratize AI for communities that
[46:33] are underrepresented, languages that are underrepresented. At the same time, because of the structure of
[46:40] the economy that we operate in, that enables this sort of AI innovation at large around the world,
[46:48] we are inadvertently sucked into this ideological tribalization of, you know, this sort of
[46:55] capitalism being the only way forward in order for us to quote, unquote, innovate. We think of
[47:01] innovation, particularly in my part of the world, as an economic driver. But that also means taking into
[47:07] account conventional economic theory, where things like the environment and labor are quote, unquote,
[47:13] externalities. And I don't think we can afford to fall back on some of these conventional ideas
[47:18] anymore, precisely because the sorts of innovation that are being called for are going to require a
[47:25] greater sort of lateral thinking.
[47:27] Very well put. And before I lose you guys, I have to ask this, and I know it's a personal question,
[47:34] but I would love to know what is your LLM of choice? Is there one that you go to? Is there
[47:39] an AI tribe that you feel like you already exist in?
[47:42] I was on ChatGPT, but I'm a believer in Gemini 2.5.
[47:46] Hmm. Robert?
[47:47] I actually, maybe I should be ashamed of this, but I pay $20 a month to Google and OpenAI and
[47:54] Anthropic.
[47:55] Elena, what about you?
[47:56] I refuse to participate in the subscription as a service model. And so true to my advocacy of
[48:03] context mattering, I play around with different models and I pick and choose as to which solution
[48:10] is best delivered for my particular context.
[48:12] And Evgeny, what about you?
[48:13] Okay, so I have to make a confession. So here it will be a real AI Alcoholics Anonymous kind of setting.
[48:19] So I have three Claude Max accounts for which I'm paying probably $600 or $700 a month.
[48:26] I have a Gemini Ultra account and I have GPT Top accounts. So it all adds up to about $1,000 a month.
[48:33] Wow. You are the ideal AI user.
[48:38] Everything reverses in the last year.
[48:41] I mean, I live, like I spend most of my day inside those models. If you really want to innovate,
[48:46] somebody has to pay for that. It's not just low-hanging fruit that you can just go and get on the cheap.
[48:51] And the difference is immense.
[48:53] Evgeny, Elena, Robert, and James, I want to thank you for this very lively debate.
[48:58] I've personally learned a lot and I hope that you've enjoyed this conversation.
[49:02] We would love to hear your thoughts, so please share them.
[49:05] My name is Mohamed Hassan and this has been the Doha Debates Podcast.