Social media giant Meta has announced plans to deploy four new generations of its in-house Meta Training and Inference Accelerator (MTIA) chips by the end of 2027. The roadmap – MTIA 300 (already in production), MTIA 400 (Iris, completing lab testing), MTIA 450 (Arke), and MTIA 500 (Astrid) – comes at a time when Meta has been pushing to reduce reliance on third-party silicon while meeting surging demand for AI workloads across its platforms.
The MTIA 300 currently powers training for ranking and recommendation systems on Facebook and Instagram. The MTIA 400 has finished lab testing and is moving toward data-center deployment; Meta describes it as “competitive with leading commercial products.” The MTIA 450 is targeted for mass production in early 2027, with the MTIA 500 following about six months later. Yee Jiun Song, Meta’s vice president of engineering for the MTIA program, described the cadence as unusually rapid: “It’s unusual for any silicon team to release a new chip every six months.”
“We deploy hundreds of thousands of MTIA chips for inference workloads across both organic content and ads on our apps. These chips are specifically designed for our workloads, and are part of a custom full-stack solution, helping us create a highly optimized system that’s tailored to our needs. This system achieves greater compute efficiency than general use chips for our intended purposes, making MTIA much more cost efficient,” the company wrote in its blog post. The chips are built on the open-source RISC-V architecture, manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC), and developed in partnership with Broadcom. Designing custom RISC-V silicon and manufacturing through TSMC gives Meta better price-per-performance and higher power efficiency, while allowing it to adapt to the quickly evolving needs of AI.
For background, Meta first disclosed its MTIA program in 2023 and released the second-generation chip in 2024. The initiative is driven by the need to diversify silicon supply, lower costs, and optimize for Meta-specific workloads, particularly content ranking, recommendation algorithms, and generative AI inference. Unlike general-purpose GPUs from Nvidia or AMD, MTIA chips are tailored for internal use, allowing Meta to eliminate unnecessary features and improve price-performance. Still, developing custom silicon remains expensive and time-intensive: media reports indicated Meta scrapped an advanced training-focused chip, codenamed Olympus, due to design difficulties.
The acceleration of AI development has outpaced traditional chip timelines. Song noted: “Even in the last two or three months things have accelerated at a pace that has kind of blown everyone’s minds. Silicon programs have to keep up.” Meta’s dual strategy of building custom silicon while maintaining massive external purchases reflects this reality. The company recently signed deals with Nvidia and AMD worth tens of billions of dollars in GPU capacity and is reportedly leasing additional compute from Google. Internal development received a major boost last year when Meta acquired Rivos Inc., along with more than 400 engineers, after a failed $800 million bid for South Korean startup FuriosaAI. The expanded team has enabled parallel development of multiple chip generations.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →