Meta launches four next-gen MTIA AI chips

Social media giant Meta has announced plans to deploy four new generations of its in-house Meta Training and Inference Accelerator (MTIA) chips by the end of 2027. The roadmap covers the MTIA 300 (already in production), MTIA 400 (Iris, completing lab testing), MTIA 450 (Arke), and MTIA 500 (Astrid), and comes at a time when Meta has been pushing to reduce reliance on third-party silicon while meeting surging demand for AI workloads across its platforms.

The MTIA 300 is currently powering training for ranking and recommendation systems on Facebook and Instagram. The MTIA 400, described as “competitive with leading commercial products,” has finished lab testing and is moving toward data-center deployment. MTIA 450 is targeted for mass production in early 2027, with MTIA 500 following about six months later. Yee Jiun Song, Meta’s vice president of engineering for the MTIA program, described the cadence as unusually rapid: “It’s unusual for any silicon team to release a new chip every six months.”

“We deploy hundreds of thousands of MTIA chips for inference workloads across both organic content and ads on our apps. These chips are specifically designed for our workloads, and are part of a custom full-stack solution, helping us create a highly optimized system that’s tailored to our needs. This system achieves greater compute efficiency than general use chips for our intended purposes, making MTIA much more cost efficient,” the company wrote in its blog post. The chips are built on the open-source RISC-V architecture, manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC), and developed in partnership with Broadcom. According to Meta, designing custom silicon on RISC-V and manufacturing through TSMC delivers better price-per-performance and higher power efficiency while letting the company adapt to the quickly evolving demands of AI.

To provide some background, Meta first disclosed its MTIA program in 2023 and released the second-generation chip in 2024. The initiative is driven by the need to diversify silicon supply, lower costs, and optimize for Meta-specific workloads—particularly content ranking, recommendation algorithms, and generative AI inference. Unlike general-purpose GPUs from Nvidia or AMD, MTIA chips are tailored for internal use, allowing Meta to eliminate unnecessary features and improve price-performance. Still, the development of custom silicon remains expensive and time-intensive. Media reports indicated Meta scrapped an advanced training-focused chip codenamed Olympus due to design difficulties.

The acceleration of AI development has outpaced traditional chip timelines. Song noted: “Even in the last two or three months things have accelerated at a pace that has kind of blown everyone’s minds. Silicon programs have to keep up.” Meta’s dual strategy of building custom silicon while maintaining massive external purchases reflects this reality. The company recently signed deals with Nvidia and AMD worth tens of billions of dollars in GPU capacity and is reportedly leasing additional compute from Google. Internal development received a major boost last year when Meta acquired Rivos Inc., along with more than 400 engineers, after a failed $800 million bid for South Korean startup FuriosaAI. The expanded team has enabled parallel development of multiple chip generations.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.