Meta is eyeing a major expansion of its AI capabilities, reportedly negotiating a multi-billion-dollar deal with Google for Tensor Processing Units (TPUs) – Google's custom-built, high-performance AI chips. The plan could see the social media behemoth running TPUs in its own data centers from 2027, with the option to rent TPU capacity from Google Cloud before then, The Information reports. For a company racing to advance AI across its platforms, the chips could provide a flexible, cost-effective alternative to its current reliance on Nvidia GPUs.
Until now, Google’s TPUs have been used primarily in-house or offered through its cloud services. Selling them directly to a major tech player like Meta would transform Google’s TPU business into a more traditional hardware-supplier role, extending its reach beyond the cloud. The move also follows reports from earlier this year that Google is looking to broaden its AI chip development beyond its longtime partner Broadcom: the Alphabet-owned search giant is said to be teaming up with Taiwan’s semiconductor powerhouse MediaTek to build the next generation of TPUs next year.
For Meta, the move is about more than just hardware. By adding TPUs to its compute toolkit, the company can diversify its AI infrastructure, reducing dependence on Nvidia and gaining more control over costs and performance. Recently, the Mark Zuckerberg-led company revealed plans to invest $600 billion in the United States. The multi-year initiative, set to run through 2028, will focus primarily on building and expanding AI data centers. At the same time, the company is grappling with significant financial pressures and undergoing internal restructuring as it aggressively pushes deeper into the AI space.
For Google, landing Meta as a customer would be a major validation of its TPU architecture, in which the company reportedly invested between $6 billion and $9 billion last year alone. In its own benchmarks, Google has claimed that TPUs handle AI and machine-learning workloads 15 to 30 times faster than traditional CPUs from Intel or AMD, and can even outperform GPUs from Nvidia or AMD. On top of that, the company says TPUs are far more energy-efficient, delivering 30 to 80 times more work per unit of power than standard CPUs and GPUs.
But the partnership is not without its challenges. Deploying TPUs at Meta’s scale is no small feat, especially given the deeply established role of Nvidia’s CUDA ecosystem in AI software development: much of the industry’s tooling, including Meta’s own PyTorch framework, is most heavily optimized for CUDA, so moving workloads to TPUs means retargeting code through Google’s XLA compiler stack (as sketched below). The long-term nature of the deal adds another layer of risk; by 2027, AI workloads and hardware designs could evolve in ways that affect both performance and cost efficiency. The stakes are high: the global AI chip market, valued at around $73.3 billion in 2024, is projected to surge to nearly $928 billion by 2034.
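To make the software question concrete, here is a minimal, illustrative sketch (not drawn from Meta’s or Google’s production code) of how TPU-targeted programs are typically written in Google’s open-source JAX framework, where the XLA compiler, rather than CUDA, maps the same high-level code onto whichever accelerator is attached:

```python
# Minimal JAX sketch (hypothetical example, not Meta/Google production code).
# JAX programs are compiled by XLA for the locally attached backend,
# so the same code can target a TPU, a GPU, or a CPU.
import jax
import jax.numpy as jnp

@jax.jit  # traced once, then XLA-compiled for the local accelerator
def predict(weights, inputs):
    # A toy "model": one dense layer with a tanh activation.
    return jnp.tanh(inputs @ weights)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (512, 256))
inputs = jax.random.normal(key, (8, 512))

print(jax.devices())                   # e.g. [TpuDevice(id=0, ...)] on a TPU VM
print(predict(weights, inputs).shape)  # (8, 256), computed on the accelerator
```

This toy example runs unchanged on TPU, GPU, or CPU; the hard part at Meta’s scale would be migrating years of CUDA-tuned kernels, serving systems, and performance assumptions, which no ten-line sketch captures.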