Anthropic reportedly exploring building its own AI chips

With compute demand growing rapidly amid an intensifying AI race, Anthropic is now reportedly looking into the idea of building its own AI chips. The Dario Amodei-led company is not actively building chips yet and has not formally committed to the project, but is internally evaluating whether developing proprietary silicon could make sense for its future AI systems, including the Claude family of models, reports Reuters.

The discussion inside Anthropic comes at a time when advanced AI development faces a shortage of powerful computing resources. Training and running modern AI models requires large clusters of specialized chips such as GPUs and AI accelerators, which handle trillions of operations and process huge amounts of data across many connected systems. As AI models grow and usage rises, securing enough computing power has become one of the largest costs and challenges for AI companies.
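To give a sense of the scale involved, the training cost of a large model is often approximated with the rule of thumb of roughly 6 FLOPs per parameter per training token. The sketch below uses illustrative figures (a hypothetical 70-billion-parameter model, 10 trillion tokens, and a roughly 1 PFLOP/s accelerator at 40% sustained utilization); these are assumptions for illustration, not Anthropic's actual numbers.

```python
# Back-of-envelope training-compute estimate using the common
# "C ~ 6 * N * D" approximation (6 FLOPs per parameter per token).
# All figures below are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

def gpu_days(total_flops: float, flops_per_gpu: float, utilization: float) -> float:
    """Days of single-accelerator time at a given sustained utilization."""
    sustained = flops_per_gpu * utilization
    return total_flops / sustained / 86_400  # 86,400 seconds per day

if __name__ == "__main__":
    flops = training_flops(params=7e10, tokens=1e13)  # 70B params, 10T tokens
    days = gpu_days(flops, flops_per_gpu=1e15, utilization=0.4)
    print(f"{flops:.2e} total FLOPs, ~{days:,.0f} accelerator-days")
```

Even under these rough assumptions, a single training run works out to on the order of a hundred thousand accelerator-days, which is why such runs are spread across thousands of chips running in parallel for weeks or months.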

Currently, Anthropic depends on a mix of external hardware ecosystems rather than any proprietary chip architecture. The company uses Nvidia GPUs along with custom AI accelerators provided through cloud partners such as Google and Amazon. These partnerships give Anthropic access to massive compute capacity without owning physical semiconductor infrastructure, but they also create long-term dependencies on external supply chains. In periods of high demand, access to top-tier chips can become constrained, and pricing for large-scale compute contracts can significantly affect operational costs.

The potential interest in building custom chips reflects a broader trend of AI companies moving toward greater control over their own infrastructure. Leading technology companies are increasingly designing their own chips to improve performance for AI workloads. For example, Google has developed its Tensor Processing Units (TPUs) specifically for neural networks, while Amazon has introduced Trainium and Inferentia chips to reduce dependence on third-party GPUs in its cloud. Meta is also investing heavily in in-house AI accelerators for its growing compute needs. Similarly, Microsoft has unveiled its Maia 200 AI chip for cloud and AI workloads, and Elon Musk’s xAI – along with Tesla, SpaceX and Intel – is working on large-scale ‘Terafab’ infrastructure projects aimed at building massive AI compute capacity.

However, building proprietary AI chips is a highly complex and capital-intensive undertaking. Designing advanced semiconductor architectures typically requires large teams of hardware engineers, long development cycles, and close collaboration with fabrication partners like TSMC, which manufactures the world’s most advanced chips. The cost of developing a single cutting-edge AI accelerator can run into hundreds of millions of dollars, and the timeline from design to deployment often spans several years. Beyond fabrication, companies must also build supporting software ecosystems, including compilers, drivers, and machine learning frameworks optimized for the new hardware.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →