About this transcript: This is a full AI-generated transcript of Is it Time to Tax the Robots? — BBC News, published April 17, 2026. The transcript contains 4,303 words with timestamps and was generated using Whisper AI.
[0:07] Hello, welcome to AI Decoded. Sam Altman is one of the most prominent voices in AI and in the last
[0:15] week he published a blueprint of what governments need to do before it's too late. That plan
[0:21] includes proposals for new robot taxes, a four-day working week, a public wealth fund giving every
[0:28] citizen a stake in AI's future and containment plans for AI systems that, in his own words,
[0:35] cannot be switched off. The man betting everything on this technology is telling the world that
[0:41] capitalism as we know it won't be enough to handle what's coming. Is he right? Also on the program,
[0:49] the robots that are already doing jobs too dangerous for humans and the AI device that
[0:55] saved a 19-year-old's life at three o'clock in the morning. We have the perfect panel this week.
[1:01] Gary Marcus is here, cognitive scientist, author, one of the most prominent voices in AI.
[1:08] Sara Bernardini designs and builds intelligent autonomous systems at Oxford University.
[1:15] And Lara Lewington is here with us. She's been reporting on technology for the BBC
[1:18] for longer than she'd care to admit. Welcome to the program. Thank you. We'll chalk that up
[1:23] to experience rather than age. Exactly so. Gary, when the CEO of OpenAI, one of the most powerful AI companies on
[1:30] earth, publishes a document that says capitalism, as we know it, will not survive what he's building,
[1:38] how seriously should we take that? Well, I don't know if people read Ronan Farrow's
[1:43] profile of him in The New Yorker, but it painted him as a bit of a sociopath who will tell anybody
[1:48] exactly what they want to hear, but he won't stick to it. I've had first-hand experience of that sitting
[1:54] next to him at the U.S. Senate, in which he told me he loved my ideas about regulation, and then
[1:59] his company was lobbying against those ideas privately. You have to remember that, first and
[2:05] foremost, Altman is a politician. This is a political play at optics. Some of what's in there
[2:12] would actually be good for humanity. For example, if we had a tax on big AI companies, that might
[2:17] actually be a good thing. But whether we'll live to see it is a totally different thing. So, you know,
[2:23] he's often talked about things like universal basic income, but in reality, they take the IP
[2:29] of artists and writers and creators, don't compensate them at all. And if that's any sign of what's to
[2:34] come, they will fight any kind of regulation that actually requires them to redistribute their profits
[2:40] in any way. Come back to me on that point, though, Gary, about the blueprint and what is in it, whether or not he's
[2:46] genuine. And I've read the article you wrote in The Guardian soon after that Senate appearance and you
[2:52] question whether he is this advocate for selfless, safety-conscious AI. But let's presume that he's had an epiphany and
[3:01] he sees what he's building and the risk that it poses. Is the blueprint he presents worth taking seriously?
[3:08] I mean, it's just a huge assumption to imagine him having this epiphany given all that we have seen.
[3:15] Again, people could read my article in The Guardian and Ronan Farrow's in The New Yorker to understand why I think the probability of that is zero, but I will take your hypothetical.
[3:24] Some of it would be good for society and some wouldn't. Part of the problem is, for example, his partner and president is giving money to MAGA to support Trump, who is anti-regulatory.
[3:37] And so what would happen is these ideas would get whittled down to a selection.
[3:44] So the one that says government supports infrastructure of AI is getting support and will get support.
[3:50] The one that says that we're going to tax these companies more would die on the vine.
[3:54] It would never actually happen, at least under the current administration.
[3:57] But he does sit in the White House, doesn't he, Gary? I mean, he is respected by Trump.
[4:01] He sits within Homeland Security. So he has a voice in this.
[4:05] Sure. And his partner will say, yeah, but don't really institute this tax thing.
[4:11] Right. So, I mean, what I have seen is two-faced behavior over and over again.
[4:16] He said certain things in the Senate. Like, one was, "We believe the artists should get a cut."
[4:24] Absolutely. Sam said that sitting next to me, told the U.S. Senate that.
[4:28] And then they have been systematically asking places like your House of Lords to give them an exemption from copyright.
[4:34] So what they say in public and what they do in private or in closed chambers and so forth is often not the same.
[4:41] Let's take each of the constituent parts of the blueprint for a second, Sara, and maybe analyze whether on paper this looks good.
[4:48] He is proposing robot taxes on automated labor. Now you build these machines.
[4:54] You help build these machines. What would that look like in practice? How do you tax a robot?
[4:59] On this point about taxing robots, I would have to agree with him because, of course, advanced robotics and AI will shift income away from labor and towards capital.
[5:15] As well as productivity increases, thanks to AI and robotics, I think that workers will have to benefit from that, too, and not only the asset owners.
[5:27] So, for example, perhaps by having higher compensation overall and maybe a shorter week.
[5:36] And so, you know, not only higher salaries, but probably better pensions, you know, I think that if we find a way for...
[5:46] But it's more fundamental than that, isn't it? Because if... I mean, where does the wealth creation come from?
[5:52] If machines are doing the work and paying no tax, how does society pay for itself in the brave new world?
[5:59] Because you've got a welfare state that needs to be funded in Europe.
[6:04] The tax base that supports that is being hollowed out by robotics.
[6:08] So it's not about paying for the workers, is it? It's about maintaining the standard of life that we have.
[6:15] Yeah, but I guess we can see that there is a concentration of these assets in the hands of just a few companies and, you know, fewer and fewer people.
[6:26] So I guess if we tax much more heavily these owners, we can then redistribute the wealth a bit more.
[6:36] Sam Altman also tried to counter this a bit with the idea of the four-day week with people still having the same salary.
[6:42] People are feeling massively overworked in their jobs, even with their use of AI.
[6:46] So actually, they go really hard for four days, get paid the same.
[6:49] So you've still got that same income tax coming in from their salaries.
[6:53] But of course, Silicon Valley is very full of a lot of words as well as a lot of technology.
[6:57] And it's often a real absolutism of this is exactly what's going to happen.
[7:01] And you hear about UBI, even HBI, and this idea of the age of abundance and we're going to have everything we want and all of this wonder is possible.
[7:10] But of course, people don't really believe that.
[7:12] And I actually think in what Altman said that there were elements that made it a little bit more human and a little bit more understandable.
[7:18] So whilst I'm not saying it's...
[7:20] In what respect?
[7:21] Well, the idea of the four-day week offering a solution to people still earning as much money.
[7:26] One of the problems that we have now, and I'm not saying that this is realistically going to happen at all, just the concept.
[7:31] They're still earning as much.
[7:33] They're being more productive potentially on those four days.
[7:36] And he's also talking about the issue of robot tax.
[7:39] Now, that's very different to it actually being enforced and taking place anywhere.
[7:42] Of course it is.
[7:43] But I think there were some elements here that are, for somebody who's in the weeds of this, like me, who hears what comes out of Silicon Valley sometimes, which is so extreme and so hard to actually relate to.
[7:55] There were sort of elements that you could take in and understand conceptually.
[7:59] So just in conclusion then, Gary, is the best we can say about this blueprint that Altman wants a debate, that he wants it to be a starting point for the conversation?
[8:11] Just coming back to the AI laws that are in place here in the UK, do you think we are prepared for what is coming?
[8:18] Do you think that conversation has already begun?
[8:20] Nobody is prepared.
[8:21] I think that the trend over the last few years has been against regulating AI in any serious way.
[8:32] You know, when Altman and I testified in the Senate in 2023, the US Senate, everybody in the room from both parties and Altman, with the exception of one person who was from IBM, was really pretty supportive of regulation.
[8:45] And the zeitgeist has changed and everybody's giving what I think is a kind of bogus argument, saying you can't have innovation and regulation at the same time.
[8:54] The reality is you can.
[8:55] Sometimes regulation actually inspires innovation.
[8:58] You know, seatbelt laws or fuel economy standards and so forth.
[9:03] But there's this weird, like, talking point in Washington that a lot of people have believed.
[9:09] And because Washington has so much power over the rest of the world, it has spread, even the EU AI Act is getting undermined.
[9:15] And so you can ask questions like, well, what if there was really a huge cyber attack, you know, tomorrow?
[9:21] Maybe a bad actor figures out a way to leverage a system like Claude or something like that.
[9:26] What would we do about it?
[9:28] Like, we don't have enforcement in place.
[9:30] We don't have a lot of investigation in place.
[9:31] We have a little bit.
[9:32] Or what if misinformation changes an election sometime soon?
[9:38] Like, what are we doing about that?
[9:40] We're hardly doing anything about any of these problems.
[9:43] Or schools, right?
[9:44] We have this problem where, you know, teenagers are using ChatGPT to do all their assignments.
[9:48] They're not learning very much.
[9:50] We have nothing around that.
[9:51] No solution around that.
[9:52] So I would say we're incredibly flat-footed.
[9:55] You know, this report is somewhat self-serving in certain ways, but if it gets people to think about that, then great.
[10:01] Well, that's a positive note on which to leave that particular point, because I do want to talk about robots since we have Sara here with us.
[10:07] Your work at Oxford is putting AI into the physical world.
[10:11] So robots, drones, underwater vehicles that don't just follow instructions.
[10:17] They plan.
[10:19] They reason.
[10:20] They act independently.
[10:22] Let's show our viewers what we're talking about.
[11:10] Now we have a video of robots in action in a mine or in an underground passageway.
[11:17] No one needs to abseil in January down a wind turbine in the North Sea, which can only be a good thing.
[11:23] But how much are these machines actually deciding for themselves?
[11:28] Yeah, I would say quite a lot, but in specific and constrained tasks.
[11:34] So if we take, for example, a fleet of robots on a wind farm, for example, doing some jobs for inspection or maintenance.
[11:44] Yeah, I mean, they can have quite a good understanding of the context through perception, which is quite accurate.
[11:51] They can...
[11:52] That's called risk-aware.
[11:53] Yeah.
[11:54] And when you talk about risk-aware, what do you actually mean?
[11:58] Is the robot making decisions independently on its own?
[12:03] Well, that's a different point.
[12:04] I mean, risk-aware means that the machines try to continuously reason about the uncertainty in the mission and how to face it,
[12:16] and also the risks that the robot is confronting when acting and, you know, proactively trying to mitigate these risks.
[12:26] So if we consider, for example, a drone in a mine like Prometheus, the Prometheus drone that we designed,
[12:33] the drones continuously think about, you know, distance from obstacles.
[12:37] It thinks about whether in some areas of the mine there might be some leaks that impair vision.
[12:46] And so it will have to be more careful when it enters these rooms.
[12:50] It will reason about how much battery it needs to go back to the entry point.
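The kind of continuous, risk-aware reasoning Sara describes can be sketched in a few lines. This is a hypothetical illustration, not the Prometheus code: the class, thresholds, and energy model are all invented for the sketch. The drone reserves enough battery to get back to the entry point, and demands a wider obstacle clearance in low-visibility areas.

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    battery_wh: float        # remaining energy
    dist_to_exit_m: float    # distance back to the entry point
    obstacle_dist_m: float   # distance to the nearest obstacle
    visibility: float        # 0 (none) .. 1 (clear)

def plan_step(s: DroneState, wh_per_m: float = 0.05) -> str:
    """Pick the next action by reasoning about mission risk."""
    # Reserve enough energy to return to the entry point, plus a 30% margin.
    energy_home = s.dist_to_exit_m * wh_per_m * 1.3
    if s.battery_wh <= energy_home:
        return "return_to_entry"
    # In poorly lit or dusty rooms, demand a larger obstacle clearance.
    min_clearance = 1.0 if s.visibility > 0.5 else 3.0
    if s.obstacle_dist_m < min_clearance:
        return "slow_and_replan"
    return "continue_mission"

# Deep in the mine with a low battery, the safe choice is to head home.
print(plan_step(DroneState(battery_wh=5.0, dist_to_exit_m=120.0,
                           obstacle_dist_m=4.0, visibility=0.9)))
# -> return_to_entry
```

Real systems reason probabilistically over many more variables, but the shape is the same: re-evaluate risk on every cycle and act on the margin, not the average case.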
[12:55] One of your projects, I think it's called Connect R, involves a robot that assembles itself to create structure in unstructured environments.
[13:06] So essentially, it's a self-building robot.
[13:09] Yeah.
[13:10] How close are we, do you think, to machines improvising like that?
[13:14] Oh, I wouldn't say that the Connect R system improvises.
[13:18] Actually, to the contrary, for a robot to be able to work in such a complex environment like a nuclear plant,
[13:27] planning is really key for its operation.
[13:31] So in the case of Connect R, for example, we have a human operator who might specify a specific structure that the operator wants to build within the nuclear plant.
[13:44] It could be, for example, a tree-like structure that connects points.
[13:49] And then, well, then there is a planner that actually reasons about how to create this structure.
[13:56] And then the robotic units follow this plan and assemble themselves into the structure.
[14:03] How much more are you using AI within your robotics now than you were, say, three, four, five years ago?
[14:08] Because obviously the conversation about AI is being had a lot now.
[14:11] But doing something like you're doing, I imagine you've been using it a lot for a long time.
[14:15] Yeah, yeah, we have been using it for a long time. Of course, you know, we have seen a lot of changes throughout the years.
[14:22] But I would say that autonomy is a field at the intersection of AI and robotics that has very deep roots.
[14:31] You know, already in the early 2000s, we reached amazing milestones.
[14:38] For example, there was this system called the remote agent that was developed by NASA.
[14:47] And this agent, you know, in 1999, basically controlled the Deep Space One spacecraft for a couple of days,
[15:02] completely autonomously, just really controlling key features of the spacecraft, like navigation and other things.
[15:12] And the spacecraft was millions of miles away from Earth.
[15:17] So, you know, this was an incredible milestone for AI and robotics.
[15:22] And then, of course, there was a lot of progress after that, thanks to machine learning, deep learning.
[15:28] And nowadays, the latest trend is VLAs, that is, vision-language-action models.
[15:34] So, you know, we have been using AI in different incarnations throughout the years.
[15:41] And I think that perception is probably the field that has progressed the most.
[15:49] Well, we have a question on that.
[15:51] In fact, I think I'm going to start with this with Gary, because, Gary, I know you've done an awful lot on robotics as well.
[15:57] We've had a question from Linda in Switzerland.
[16:01] And she wants to know about world models, which I think you're touching on here.
[16:05] And she asked, what are they?
[16:06] What are these world models?
[16:07] What can they do?
[16:08] And how are they different from the chatbots and AI agents we already know?
[16:13] It's a really good question, because this is really the next giant leap, isn't it, Gary, in AI developing some common sense?
[16:19] Yeah, so I've been an advocate of bringing world models to AI for, I guess, decades.
[16:25] And it's an idea that's common in cognitive psychology, and it's actually common in robotics.
[16:30] It's the idea that your system explicitly internally represents certain things about the world.
[16:36] So, for example, that you guys are sitting in a room around a table with monitors, that you're intelligent agents.
[16:43] If you look bored, I can infer, maybe change, you know, shorten my answer or whatever.
[16:48] So I understand a lot of how things work, and I try to represent them.
[16:53] Maybe not all of them.
[16:54] So maybe some of them don't matter.
[16:55] Like, I don't really care about the colors of the background, maybe, or what have you.
[16:59] So they don't have to be complete, but they're internal to a system, explicit representations.
[17:04] There is this person, this object, et cetera.
[17:06] And roboticists take this for granted and have for a very long time.
[17:10] And it was part of the core of how AI was built in the 1950s.
[17:15] But the neural network tradition, which powers the large language models, tries to do without that, by approximating things with lots of statistics, without having those explicit models.
[17:27] And that's part of why it breaks down.
[17:29] So, for example, the hallucinations that we see are often things that you could have just looked up in Wikipedia.
[17:34] If you had a proper world model, you wouldn't make those mistakes in the first place.
[17:38] So it just turns out that the way that they work, they don't have these explicit models.
[17:42] So I think a lot of people are coming around to something I argued for a long time, which is we need to bring these traditions together.
[17:48] We have to figure out how to make the world models work in conjunction with the neural networks like large language models.
[17:54] And it's basically about turning it from a chatbot or a parrot or a really smart calculator into something that operates in our world, that can understand our world and the repercussions of taking a particular action.
[18:08] I'm just going to drop in a little thing, which is calculators actually do have a very limited world model in a way that chatbots don't, which is kind of a remarkable fact.
[18:17] And this is why calculators are trustworthy.
[18:19] So they have a model of multiplication, for example, and precedence of operations and so forth that they follow systematically.
[18:27] Whereas the chatbots, it's always kind of statistical guesswork.
[18:30] And that's why we can't trust them.
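Gary's calculator-versus-chatbot contrast can be made concrete with a toy explicit model: a lookup either succeeds deterministically or reports "unknown", rather than producing a statistical guess. The facts and relation names below are invented purely for illustration.

```python
# A tiny explicit world model: a store of facts that can be queried.
# Unlike a statistical text predictor, it never fills a gap with a guess;
# a fact is either in the model or the answer is "unknown".
WORLD = {
    ("Paris", "capital_of"): "France",
    ("water", "boils_at_sea_level_c"): 100,
}

def query(entity: str, relation: str):
    """Deterministic lookup against the explicit model."""
    return WORLD.get((entity, relation), "unknown")

print(query("Paris", "capital_of"))   # -> France
print(query("Paris", "population"))   # -> unknown (no hallucinated number)
```

That "or say unknown" branch is the whole point of the hallucination argument: a system with explicit representations knows the boundary of its own knowledge, while a purely statistical one does not.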
[18:31] Wow.
[18:32] Extraordinary.
[18:33] If you wanted to come in on that.
[18:34] Yeah, no, I was just, I wanted to.
[18:36] No pressure.
[18:37] If you could just figure all that out, by the way.
[18:40] No, no.
[18:41] I mean, just a clarification that I think, I mean, world models have been always there in robotics, right?
[18:48] I mean, the problem is how do we obtain this world model?
[18:51] So for a long time, the model of the world was basically programmed by the engineer, who was specifying, you know, the structure of the environment in which the robot was moving and the different actions that the robot can do and how these actions affect the world.
[19:13] And so, you know, the robot had this model, but only because the engineer was providing it.
[19:20] Right.
[19:21] So you're saying the leap will be when the world model can actually assess, adapt, maybe rebuild itself to live in a space that we don't know anything about, that we can't help it with.
[19:35] Yeah.
[19:36] But the ultimate goal is that the robot can form this model by itself.
[19:40] A world model's ability to be able to predict and understand consequence can make it behave a bit more like a human might as well, can't it?
[19:48] Unlike that large language model, which is working on statistical probability and patterns in words or reinforcement learning, where it's about reward or punishment for replicating something that is being seen.
[20:00] It's almost giving another dimension that could perhaps feel a little bit more.
[20:05] I know you've just mentioned the phrase common sense, but feel a little bit more like common sense.
[20:09] It's the way that Wayve's autonomous vehicles are working.
[20:12] It allows for something that maybe takes those steps ahead and thinks, well, if this happens, that will happen in a way that is a little bit deeper than what language can provide.
[20:23] Interesting. Well, let's move to a device that you can wear on your finger and specifically a device that saved, Lara, a young woman's life at three o'clock in the morning.
[20:34] Tell us about Maeve O'Neill.
[20:36] Yes. Well, at just 19 years old, she was ill, she had Covid, she had tonsillitis.
[20:44] She'd been back and forth from the doctor. The doctor just said, you'll get better, don't worry, it'll be fine, kept sending her off home.
[20:50] She felt something was really amiss, that the problem was getting more serious.
[20:54] And the data from her smart ring showed that her readings had gone all over the place.
[20:59] Her heart rate variability, her resting heart rate, her temperature, they were all out of kilter.
[21:04] So she was convinced something was more seriously wrong, as she really did feel very bad.
[21:09] And she ended up going to A&E, or she's in the States, so the ER.
[21:13] And from there, she found out that she really was a lot more sick than she'd realised.
[21:18] You are actually part of an increasing number of people who are spotting serious issues by identifying the biomarkers moving on your ring or on your watch,
[21:29] whatever the device is you're wearing, and realising that something needs further investigation.
[21:33] So can you tell me a little bit about what happened?
[21:36] So I originally got sick, and in the past, I've usually gotten sick around September slash end of August.
[21:44] And it's usually like a strep throat, just like when you go back to school, all the germs.
[21:49] And so moving into college, I ended up getting a strep throat.
[21:53] I had tonsillitis.
[21:54] And the doctors had told me I don't need antibiotics, and that I'm OK to just go home, take some Advil and rest.
[22:01] And originally, I was like, OK, I'll go home and rest.
[22:06] After a few days, I noticed I wasn't getting any better.
[22:09] I went to about two more doctors, and they told me the same exact thing.
[22:13] One even told me I just had COVID, and just to go home.
[22:17] And after about a week and a half of feeling sick, at that point, I really knew something was off.
[22:25] After, like, I tried sleeping, and my respiratory rate, like, immediately spiked.
[22:30] My resting heart rate would not go down, and just about everything else was really out of whack.
[22:37] Like, my body temperature had risen by about five degrees, and I knew that something was wrong with my body.
[22:43] And I couldn't even fall asleep on my back because it was too painful to even do anything like that.
[22:48] And later on, we'd found out that, like, my body was going septic, and that's why everything was going off on the Oura Ring.
[22:57] I love the story.
[22:59] I think it's relevant to everyone because I know here in the UK, one in three Brits uses an activity tracker or wearable device.
[23:08] We all want to live more healthily.
[23:11] The thing that's really different about this is that it's the AI looking at the data this wearable is providing that was the clue, the breakthrough.
[23:20] Yeah, that's right. I think that wearing an activity tracker is a lot more about patterns than it is how many steps you did yesterday.
[23:27] So, by seeing your long-term patterns and recognising what your normal is, you can see when something goes out of kilter.
[23:33] They're not medical devices. They're not for diagnosis, albeit some of them do have some medical-grade sensors in them.
[23:39] But there have now been a growing number of people who have identified serious medical conditions by seeing something is amiss with the data.
[23:48] So, a lot of people recognising they have AFib. There have been people who ended up getting a cancer diagnosis, or seeing that they're about to have a lupus episode.
[23:56] So many different conditions. From seeing that the pattern has changed, their data isn't right.
[24:02] They know for them something doesn't feel right and it's encouraged them to go and see a doctor and get a diagnosis that they've needed.
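At its simplest, the "know your normal" pattern Lara describes is baseline anomaly detection: compare today's reading against your own long-term statistics and flag readings far outside the usual spread. This is a minimal sketch with invented data and thresholds; real wearables fuse many signals with much richer models, and, as she notes, none of this is a diagnosis.

```python
import statistics

def out_of_kilter(history: list[float], today: float, z_limit: float = 3.0) -> bool:
    """Flag a reading more than z_limit standard deviations from baseline."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)  # sample standard deviation
    return abs(today - mean) / sd > z_limit

# Ten nights of typical resting heart rate (invented numbers).
resting_hr = [58, 60, 59, 61, 57, 60, 59, 58, 60, 59]

print(out_of_kilter(resting_hr, 62))  # -> False: within normal variation
print(out_of_kilter(resting_hr, 85))  # -> True: far outside the baseline
```

The point is that 85 bpm is only alarming *relative to this person's history*; for someone whose baseline sits at 80, the same reading would be unremarkable.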
[24:09] I only wear one ring and it's very unhealthy if I don't wear this, although it has increased my stress levels, I can tell you that.
[24:15] Do you have a wearable, Gary? Are you an Oura fan?
[24:19] I have an Apple smartwatch.
[24:21] Okay. And are you religious in tracking your ups and downs, your sleep patterns, your steps?
[24:27] I track my sleep, my steps, my daily calories and I'm pretty religious about it.
[24:32] It's a way to motivate me to have good habits and I think that's a perfectly great use of these kinds of technologies.
[24:39] I totally agree. Yeah. What about you, Sara?
[24:42] Well, I also have an Apple watch, but actually I'm thinking that sometimes these things can be counterproductive.
[24:51] I mean, for example, just to give you an example, I think I have a lot of friends who are telling me that since they started tracking their sleep,
[25:02] now they are so obsessed about sleeping well that they don't sleep well at all.
[25:07] Yeah. Look, there's a whole side of anxiety, health anxiety that's coming from these devices.
[25:11] And for some people, they are going to cause you to sleep less.
[25:14] They're going to cause you to worry more.
[25:16] Well, you might have some thoughts on the programme and ideas for future episodes.
[25:21] And if you do, we want to hear from you.
[25:22] You can, of course, email us at ai-decoded at bbc.co.uk
[25:26] and maybe we can incorporate that into some of our future programmes.
[25:30] That is all we have time for this week.
[25:32] Gary, thank you very much to you there in Vancouver.
[25:35] Thank you, Sara. Thank you, Lara, as well.
[25:37] Just a quick reminder that you can watch the back episodes of AI Decoded.
[25:42] It's all there on the YouTube playlist, or at least I hope it is, and on the BBC iPlayer.
[25:47] Do take a look at that.
[25:49] Thank you for watching. We'll see you same time next week. Goodbye.