OpenAI is planning to debut its own AI chip, developed in partnership with chip manufacturer Broadcom, as early as next year, the Financial Times reports. The deal is valued at more than $10 billion, with initial shipments expected to begin next year.
The move comes as demand for AI accelerators continues to rise and Nvidia continues to capture the lion’s share of the market, even as China steps up competition. The latest development is intended to reduce OpenAI’s dependence on external suppliers and to secure computing capacity for future large-scale models.
Training and operating advanced AI models requires increasing amounts of processing power. Nvidia currently controls about 80% of the accelerator market, creating exposure to supply constraints and price increases. Global chip production also remains concentrated. TSMC plays a central role in advanced semiconductor manufacturing, while US–China trade restrictions have added further constraints to supply chains.
OpenAI has previously supplemented Nvidia hardware with AMD chips but is now pursuing a dedicated line of processors with Broadcom instead.
Still, the AI firm seems relatively late to the scene: other tech behemoths, including Google, Amazon, and Meta, have already taken a similar approach by designing their own chips. These processors are intended to handle specific workloads such as inference or large-scale training.
The rationale is straightforward: proprietary silicon gives firms greater leverage in pricing negotiations, introduces alternatives into a concentrated supply chain, and gradually reduces the risk of depending on a single supplier.
This comes at a time when Broadcom’s AI semiconductor revenue reached $5.2 billion in the third quarter of the year, a 63% increase year-on-year, with projections of $6.2 billion in the fourth quarter. AI now represents 57% of the company’s semiconductor revenue, and its total consolidated backlog has reached a record $110 billion.
The arrangement involves OpenAI leading chip design, Broadcom providing engineering, and Taiwan Semiconductor Manufacturing Company (TSMC) producing the chips. Mass production of the new processors, referred to internally as “XPUs,” is expected to begin in 2026. These chips will be used exclusively within OpenAI’s infrastructure and will not be sold externally.
Richard Ho, OpenAI’s Head of Hardware and a former engineer on Google’s Tensor Processing Unit project, is reportedly directing the initiative. Reports state that around 40 engineers are assigned to the program.
OpenAI had previously explored building its own chip fabrication facilities but abandoned the plan due to prohibitive costs and extended timelines. Estimates placed the expense of establishing a modern foundry at more than $30 billion, with years required before production could begin.
The costs of running large-scale AI models have added further financial pressure to OpenAI’s operations. Reports indicate that the company lost around $5 billion in 2024, largely due to the high cost of procuring and running GPUs to train and deploy models such as GPT-4 and GPT-5.