OpenAI’s massive Stargate infrastructure initiative – a planned $500 billion partnership with Oracle and SoftBank – is reportedly encountering early setbacks as disputes over leadership, ownership, and financing slow momentum. First unveiled in 2025, the initiative aims to construct large AI data centers supplying up to 10 gigawatts of compute capacity, backed by an initial funding commitment of about $100 billion. The project was designed to support future AI models at unprecedented scale, but internal friction and the challenges of funding mega-infrastructure are delaying key decisions and early development work, reports The Information.
Notably, the Stargate plan emerged amid explosive demand for computing power driven by generative AI and large language models. Training frontier systems now requires tens of thousands of advanced GPUs, high-speed networking, and immense storage capacity. Stargate was created to meet such demand by building hyperscale data campuses powered by dedicated energy infrastructure, advanced cooling systems, and high-bandwidth interconnect networks designed specifically for AI workloads.
At its core, the initiative is envisioned as a new type of infrastructure stack. Rather than relying solely on traditional cloud providers, Stargate would combine purpose-built facilities, long-term chip supply agreements, and energy partnerships to ensure stable capacity. Early plans included constructing multiple data-center hubs across the United States, with potential expansion into allied regions to improve stability and reduce latency for global users.
However, aligning the interests of multiple global partners has reportedly proven complex. The report indicates that disagreements have surfaced over governance structures, operational control, and long-term asset ownership. Importantly, questions remain about how revenue would be shared, who would operate facilities, and how financial risks would be allocated across decades-long infrastructure lifecycles. While large institutional investors and infrastructure funds have shown interest in AI-linked assets, lenders reportedly remain cautious about underwriting debt tied to rapidly evolving technology demand and uncertain long-term pricing for AI compute services.
The reported friction has prompted the Sam Altman-led firm to explore parallel strategies to secure computing capacity. Instead of relying solely on a single mega-joint venture, the company has pursued long-term capacity agreements and regional infrastructure partnerships. For example, Oracle has expanded its cloud and data-center footprint to support AI workloads, including plans for large-scale GPU deployments and expanded high-performance networking.
This comes at a time when the ChatGPT maker is said to have informed investors that it expects to invest about $600 billion in compute infrastructure by 2030, spanning hyperscale data centers, advanced AI chips, high-speed networking, and the energy systems required to power them. To support this expansion, the company is reportedly pursuing one of the largest private fundraises ever, seeking up to $100 billion at a valuation of $750–$830 billion, with backing expected from SoftBank, Nvidia, and other tech giants.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.