US-based AI chip startup Groq has secured $750 million in a new funding round that values the company at $6.9 billion. The round was led by Disruptive, the Dallas-based growth investor known for backing major tech firms, with additional participation from BlackRock, Neuberger Berman, Deutsche Telekom Capital Partners, Samsung Electronics, Cisco Systems, Altimeter, D1 Capital Partners, 1789 Capital, Infinitum, and an unnamed West Coast mutual fund. Disruptive alone committed nearly $350 million, and the round more than doubles Groq's valuation since its prior raise in August 2024.

“As AI expands, the infrastructure behind it will be as essential as the models themselves,” Alex Davis, Founder, Chairman, and CEO of Disruptive, commented on the funding. “Groq is building that foundation, and we couldn’t be more excited to partner with Jonathan and his team in this next chapter of explosive growth.”

Groq, founded by former Google engineer Jonathan Ross, develops processors and delivers data-center capacity tailored for inference—the phase of AI where pre-trained models are run to power real-time applications. Ross had played a major role in inventing Google’s TPU (tensor processing unit) before launching Groq in Mountain View, California.

The AI chip company started with $10 million in seed funding in 2017 and survived challenging early years, eventually capitalizing on the generative AI boom in the post-ChatGPT era. The new capital will drive expansion of its global infrastructure, including several new data center sites and Groq’s first Asia-Pacific facility set to launch before the end of 2025. Groq’s computing capacity recently grew by over 10% in a single month and was immediately absorbed by client demand. Currently, Groq operates 13 facilities across the US, Canada, Europe, and the Middle East, supporting deployments such as Saudi Arabia’s Humain chat service and the rollout of OpenAI’s GPT-OSS model in the kingdom.

In the rapidly evolving AI chip landscape, Groq targets inference workloads where speed, predictability, and cost are critical. Unlike AMD and Intel, which rely on GPU and CPU architectures that serve both training and inference, Groq has purpose-built its LPU™ Inference Engine for low-latency, high-throughput inference. That focus lets Groq outperform many competitors in scenarios demanding quick, consistent responses.

The $750 million infusion gives Groq the resources to scale operations rapidly, open new data centers, develop next-generation products, recruit global talent, and support institutional-scale activity. It also positions the company to meet surging demand, particularly in underserved regions and verticals, and to strengthen its role in exporting the American AI tech stack worldwide. With that runway, Groq can keep innovating, invest in customer and developer support, and expand geographically, potentially leveling the playing field against market leaders like Nvidia, AMD, and Intel.

The global AI chip market is surging, expected to hit $83.8 billion in 2025 and reach $459 billion by 2032, driven by data center investments and generative AI adoption. GPUs remain the default standard, claiming over 46% of the market, but specialized chips—from TPUs to custom ASICs for inference—are increasingly popular as enterprises seek lower latency and greater efficiency. Machine learning drives roughly 36% of demand, with major growth in autonomous systems, healthcare, and financial services. New players like Groq are also enabling edge AI and real-time data applications through energy-efficient silicon.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure.