About this transcript: This is a full AI-generated transcript of Arm Releases First Ever AI Chip, With Meta As Initial Customer, published March 30, 2026. The transcript contains 2,954 words with timestamps and was generated using Whisper AI.
"Chips with designs by Arm are inside nearly every smartphone, Apple computer, Nvidia server rack, Amazon, Google, and Microsoft data center. Now, Arm's going to compete with most of these mega customers too, joining their ranks as a fabless chip company because it's now in a whole new business,..."
[0:00] Chips with designs by Arm are inside nearly every smartphone, Apple computer,
[0:06] Nvidia server rack, Amazon, Google, and Microsoft data center. Now, Arm's going to
[0:11] compete with most of these mega customers too, joining their ranks as a
[0:15] fabless chip company because it's now in a whole new business, making physical
[0:19] silicon. It was really designed for the AGI era, and it's incredibly
[0:24] low power, it's incredibly high performance, and it's sort of ruthlessly
[0:28] optimized. This is Arm's new in-house central processing unit, Arm AGI, built
[0:34] specifically for power efficient inference with Meta as its initial
[0:38] customer. It's just not an option in the world for us, right? It's all about
[0:43] supply chain diversity. For the first-out chip to be embraced by Meta, I'll admit
[0:47] I'm a little bit surprised. It's very tough to get the first chip right. We're
[0:52] here in Austin, Texas for an exclusive first look at this brand new chip lab
[0:56] where Arm's making it happen. We literally have a bunch of servers
[0:59] in there and people all around the world are logged into it and working on
[1:03] optimizing the silicon and getting it ready for our customers like Meta,
[1:07] customers like OpenAI, F5 Networks, SAP. For more than 35 years, the UK-based
[1:12] chip design firm has been selling its architecture, or instruction sets, to
[1:16] almost all the largest chip makers and collecting royalties on every Arm-based
[1:21] chip too. It offers a lower power alternative to the traditional x86 PC
[1:26] and server architecture used by CPU giants Intel and AMD. It's been a
[1:29] long time since we've been able to build a chip like this. We really don't
[1:31] even know how we're going to power data centers that we're building in 2028 and
[1:35] 2029. And Arm is really good at low power at a specific performance level.
[1:42] But what do Arm's top customers think of Arm becoming a competitor in the race
[1:46] to build the best AI chips? We talked to top leaders at Meta and Arm to find out,
[1:51] and asked why CPUs are having a renaissance as agentic AI takes off, and how the new
[1:56] Arm can succeed in such a crowded field of AI chip makers today.
[1:59] Before this new chip, Arm didn't make anything tangible. Rather, it sold
[2:13] licenses to its core designs as an alternative to traditional x86 architecture,
[2:18] the instruction set that's dominated since Intel introduced it nearly 50 years ago.
[2:22] x86 has been tried and true. It can run pretty much anything. What Arm has
[2:29] brought to the table with the royalties and licenses is you can just create the chip that
[2:35] you want with nothing else.
[2:39] Basically, from 12 founders working out of a turkey barn in Cambridge in 1990 to becoming
[2:43] the lead chip architecture in almost every mobile phone by the early 2000s, thanks to
[2:48] early deals with Apple and Texas Instruments.
[2:51] Then, Arm broke into architecture for data center chips with the launch of its Neoverse
[2:55] platform in 2018. Amazon took it mainstream that same year when it launched its first
[3:00] custom processor, Graviton, based on Arm Neoverse.
[3:04] Mohamed Awad joined Arm in 2018, too.
[3:06] It's been over 1.25 billion chips
[3:09] shipped into the data center based on Neoverse since we launched.
[3:13] Neoverse gained traction as more and more hyperscalers got into custom silicon.
[3:18] By the time Arm went public for the second time in 2023, it was the largest public offering
[3:22] in nearly two years.
[3:24] Its instruction sets, which are more customizable than x86, allow fabless chip makers like Amazon,
[3:30] Google, and Microsoft to tailor their designs, integrating other Arm technologies like interconnect,
[3:35] memory support, and compute subsystems.
[3:38] If Arm didn't exist,
[3:39] then all of those companies that have their own processors wouldn't be able to create
[3:45] their own.
[3:46] Arm's long held the building blocks to make a CPU.
[3:49] So why choose now for its next big move?
[3:52] In the agentic world, what we're seeing is that, you know, you've got all these GPUs
[3:56] which are now firing off agents and all those agents need to go off and execute.
[4:02] And it's the CPU that does the execution.
[4:04] Before it was people.
[4:06] People were the bottleneck in that process.
[4:07] But now, with AI spawning all these agents,
[4:10] that bottleneck has been removed.
[4:12] And so the demand for CPUs to keep up with all that is really going through the roof.
[4:16] Arm has been hinting at this for perhaps the last four earnings calls,
[4:20] over a year. This lab has been in the works for 18-plus months.
[4:24] Why is now the right time to launch?
[4:26] One of the things that we always said was we were going to launch when we had the silicon
[4:29] back.
[4:30] We knew it was working well.
[4:31] We knew that we had customers, strong customer demand.
[4:33] You know, you think about all those tick boxes and we're there.
[4:37] Congratulations on launching Arm's first data center chip.
[4:38] We're so proud of you.
[4:39] Thank you.
[4:40] Thank you.
[4:41] Congratulations to Arm.
[4:42] Congratulations.
[4:43] Congratulations.
[4:44] Indeed, dozens of customers who license Arm's architecture to build their chips have publicly
[4:48] endorsed Arm's break-in to compete with them in physical silicon.
[4:51] It's a trillion dollar market.
[4:53] And what we're seeing over and over again is actually our partners coming out and understanding
[4:57] and realizing this is actually great for the industry.
[5:01] Customers like AWS and Google and Microsoft and Nvidia have all come out and said, hey,
[5:06] this is a good thing for the ecosystem.
[5:08] At least eight customers
[5:10] have already committed to purchasing Arm's new CPU, and about 50 of Arm's existing customers
[5:15] signaled support for the chip at the event where it was announced.
[5:18] Big names like Nvidia, AWS, Google, Samsung, Broadcom, Microsoft, Micron and TSMC.
[5:26] A lot of your customers have been supportive.
[5:29] Not everybody.
[5:30] Where was Qualcomm today?
[5:31] I think that's a question for Qualcomm.
[5:33] You know, you know, they've been a strong partner of ours in the past.
[5:37] I look forward to having a strong relationship with them in the future.
[5:40] Qualcomm bases its flagship Snapdragon PC and smartphone processors on Arm.
[5:45] But it's working on an Arm-based server CPU that will compete with this one.
[5:49] And the two have traded lawsuits since 2022 over the right to make certain chips on Arm
[5:54] technology.
[5:55] How about AMD?
[5:56] How do you anticipate that reaction?
[5:57] AMD is another great partner of ours.
[5:59] If you look at kind of some of the stuff that they're doing on the FPGA side, Xilinx and
[6:03] some of the embedded platforms, you know, those are all areas where we are very deeply
[6:07] engaged with AMD.
[6:08] And so I look to continue those strong partnerships.
[6:10] I neither want to minimize nor maximize their potential entry into the merchant silicon
[6:17] market.
[6:18] I mean, we are going to run as fast as we possibly can to meet our customer requirements.
[6:24] And competition overall is a good thing.
[6:26] It keeps us on our toes and keeps us running very fast.
[6:28] AMD is second only to Intel in server CPU market share.
[6:32] The inventor of x86, Intel, also runs a foundry for manufacturing chips.
[6:37] We obviously compete with Intel in some areas.
[6:39] But we partner with them in a bunch of areas as well.
[6:41] If you look at their IPU products, these are their networking products that go into managing
[6:45] the data center.
[6:46] They're all based on Arm.
[6:47] They're a licensee of ours.
[6:49] You know, we've worked closely in the past and continue to work closely with Intel foundry
[6:53] and looking for ways to partner with them in the future.
[6:56] Like nearly all fabless AI chip makers, Arm currently manufactures its CPU at Taiwan Semiconductor
[7:02] Manufacturing Company.
[7:04] Made on TSMC's 3 nanometer node, Arm's CPU is entirely fabbed in Taiwan for now,
[7:09] although TSMC does have chip manufacturing in Arizona,
[7:10] with a 3 nanometer fab coming soon.
[7:13] Do you have plans to bring it to Arizona when the 3 nanometer fab opens?
[7:18] You know, we would love to manufacture here.
[7:20] It really comes down to, you know, what our customers are looking for.
[7:23] Once the chips come back from TSMC, Arm needs to test early iterations.
[7:28] So Arm built this in under two years.
[7:30] An entirely new $71 million lab with three separate rooms and a ballooning team in Austin,
[7:36] Texas.
[7:37] You can see we're in an active construction site.
[7:39] When we started in Austin, it was well over $20 million.
[7:41] And we were probably, you know, 10 to 15 people.
[7:44] Today we're over a thousand, growing rapidly.
[7:47] The initial bring-up for the chips happens in this first room in the Austin lab.
[7:51] This is a validation board.
[7:52] This is the board they get plugged into that kind of allows us to basically pull out all
[7:55] the different signals from the chip and just ensure that everything is operational as we
[8:00] would expect.
[8:01] Next, the chips are put into server racks next door for more testing.
[8:05] Underneath these heat sinks, you have two different Arm AGI CPUs, but they are connected
[8:11] and acting as a single CPU.
[8:13] Hundreds of cores that are acting together, hundreds of cores that can access all the
[8:15] memory on the system.
[8:17] This is great for things like database workloads.
[8:20] Then they'll come here to a third and final stage of testing, eventually.
[8:23] This is going to be a full set of automated tester equipment so that we can do things
[8:27] like failure analysis, so that we can do pre-screening of parts before they go out.
[8:31] We're literally building the runway as we're taking off in some cases.
[8:34] Finally, up to 64 of the new CPUs, that's a total of 8,700 cores, are placed in server trays.
[8:41] These are trays that slide into a full CPU-only air-cooled rack, something Arm tests in a
[8:45] small mock data center on site.
[8:48] It's a dense configuration Arm is betting will appeal to data center customers around the world.
[8:52] You can get two times the performance per watt than you can from an x86 rack.
[8:57] So that means twice as much performance in the same footprint, in the same power.
[9:01] How much of a game-changer is that when there's a lot of power constraints facing data centers?
[9:05] It's huge.
[9:06] So if you think about a one gigawatt data center, there's something like 10 million CPU cores
[9:10] today that are going to be on that.
[9:11] Yeah.
[9:11] With agentic, we think that's going to grow to 100 million
[9:14] or more.
[9:15] That's going to represent an incredible amount of power.
[9:17] Awad says the power efficiency largely comes from Arm's decision to ruthlessly optimize
[9:22] the AGI chip specifically for artificial general intelligence, hence the name.
[9:27] It's not worried about supporting a lot of those legacy applications and that legacy
[9:30] software that in some ways becomes a burden for some of the existing designs because it
[9:35] creates overhead in terms of silicon, it creates overhead in terms of power, it creates overhead
[9:39] in terms of performance.
[9:41] In AI's world, wattage is like liquid gold.
[9:45] So you can imagine a world where if you have a best-in-class CPU that's giving you the
[9:51] best performance per watt that you possibly can, that opens up more wattage for other
[9:56] parts of your infrastructure, right?
[9:58] Memory is another necessity in short supply for AI chip makers, now including Arm.
[10:02] We had a pretty good understanding of the timeframe at which it was going to go to production.
[10:07] So we've been working to secure those supply chains for quite some time now.
[10:10] Wafer capacity is another necessity.
[10:12] It's another industry-wide crunch.
[10:14] Concerns about getting enough 3 nanometer capacity? TSMC's most advanced nodes are quite
[10:18] in demand.
[10:19] Yeah, absolutely.
[10:20] Demand is a challenge.
[10:22] But because of our relationships, because of that forward planning that we've
[10:24] worked through with them, we're feeling pretty good about where we're at on that.
[10:31] Unlike other tech giants like OpenAI, who've quietly pulled back on data center plans,
[10:36] Meta remains all in on its AI infrastructure spending spree, making it a logical choice
[10:41] as the first customer for Arm's CPU.
[10:44] It's really a way for us to have a wide, diverse set of options to purchase from.
[10:50] It's like in today's world, you really only have a couple of players.
[10:54] And so this adds yet another player to the ecosystem for us.
[10:57] Paul Saab has been with Meta for 18 years, and he's one of five engineers making sure
[11:02] every piece of Meta software can run on Arm.
[11:05] It will power any workload that we throw at it.
[11:09] It was meant to basically be a full replacement, drop-in replacement for compute CPUs.
[11:15] And be transparent to our developers.
[11:18] Saab was also around when Facebook launched the Open Compute Project in 2011, now with
[11:23] hundreds of member companies like Arm and Nvidia committed to open hardware designs
[11:28] that help reduce data center energy consumption and costs.
[11:31] The first conversations we had with Arm were, hey, if we build this, we don't want to keep
[11:37] this only within the company.
[11:39] We're not like a chip company that's trying to build sales channels to sell chips.
[11:43] We wanted it to be available to the whole world.
[11:45] Meta has a total of 30 planned or operational data centers, from its Hyperion site with up
[11:50] to five gigawatts of capacity in Louisiana to one gigawatt sites under construction in
[11:55] Ohio and Indiana.
[11:57] Meta is also reportedly looking to lease space at the giant Stargate site in Texas, where
[12:01] OpenAI and Oracle scrapped plans to expand up to 10 gigawatts of capacity.
[12:06] Meta got into a little bit of a bind with their Llama models.
[12:09] Their last Llama models that came out, in operation, they didn't work as well.
[12:14] So they got behind.
[12:16] And they also recognized we don't have enough compute power to do what we need to do.
[12:23] And they increased capex, they built new data centers, they went to third parties and got
[12:30] capacity from them.
[12:33] And to fill all these data centers, you need a lot of chips.
[12:35] Meta's filling those data centers with GPUs from both Nvidia and AMD, with huge new deals
[12:40] announced in February.
[12:42] And with its own line of Meta Training and Inference Accelerators, or MTIA,
[12:46] that it's been making since 2023, unveiling four new MTIA chips in March.
[12:50] CPUs to power all those accelerators will come from Nvidia and now Arm.
[12:54] In a way they helped co-develop this processor.
[12:59] Although financials of the deal weren't disclosed, Meta announced in January that it plans to
[13:04] spend between $115 and $135 billion on AI in 2026.
[13:09] For Arm, there's big opportunity in partnering with the social media giant that makes about
[13:13] 50 times more in annual revenue than it does.
[13:15] I also think that the company is looking for higher revenue
[13:20] numbers. Let's say they get 5% of Meta's $115 to $135 billion CapEx going into the future.
[13:29] That is a game changer on the top line for them.
[13:33] Awad says Arm's CPU isn't currently export controlled, so it can also be sold in China, which
[13:38] made up about 19% of Arm's revenue in 2025, a percentage that's been steadily declining since
[13:44] 2023. As for which U.S. customers might buy the Arm CPU next, Nvidia is especially
[13:49] interesting. Nvidia tried to buy Arm for $40 billion, but was shut down by regulators in
[13:54] 2022. Nvidia has since sold off the rest of its stake in Arm.
[13:59] How is that relationship now?
[14:01] Our relationship is great. They're one of our strongest partners.
[14:03] We partner with them on all aspects of their designs.
[14:05] On our side, we look at our Arm AGI CPU as something that may exist alongside some of their
[14:11] products in a data center.
[14:13] We don't see any reason why it can't.
[14:14] I think Jensen knows he needs more CPUs to sell more GPUs, so he's very pragmatic.
[14:20] Arm wouldn't disclose pricing for the CPU, but called it competitive, saying it aims to help
[14:24] companies get access to compute even if they can't afford to make their own in-house
[14:28] processors. Moorhead predicts it'll be in the thousands of dollars.
[14:32] You know, there's a lot of ways to get Arm data center chips today, but you have to have 1,000
[14:37] engineers and a $500 million budget to go
[14:44] create it. So there's definitely a market need.
[14:48] Is there a proving point that we should wait to see to say, all right, Arm did this, it was
[14:52] successful, the yields were good,
[14:54] we've got AI workloads running at scale on Arm-made CPUs?
[14:59] Well, we've got silicon back now.
[15:01] It's running real workloads and we've got silicon in customers hands in the labs.
[15:06] They are actively testing it, verifying it, qualifying it to go to production.
[15:11] And we expect to be in production later this year.