With India set to host the AI Impact Summit 2026, major developments in the country’s artificial intelligence ecosystem have already begun to emerge. Tata Consultancy Services (TCS), the world’s second-largest IT services company, has announced an expanded strategic partnership with AMD aimed at building large-scale AI computing infrastructure across India. Beyond scaling infrastructure, the initiative reflects a strategic effort to build a credible alternative to Nvidia’s dominant AI hardware ecosystem as compute demand intensifies.

The collaboration focuses on deploying rack-scale AI systems powered by AMD’s Helios platform architecture. This infrastructure blueprint integrates high-performance AI accelerators, next-generation server CPUs, high-speed networking, and open-source software tools into a single scalable system designed for training large language models, running inference workloads, and processing real-time analytics. TCS will lead the design, integration, and operational deployment of these systems, while AMD will supply the core compute architecture, including Instinct AI accelerators and EPYC processors optimized for data-intensive workloads.

The infrastructure rollout will be executed through TCS’s data-centre arm, HyperVault AI Data Center Ltd., which aims to build AI-ready facilities capable of supporting hyperscale computing demand. Initial plans outline up to 200 megawatts of AI-ready data centre capacity, positioning the project among the largest AI compute deployments planned in India.

“This collaboration lays the foundation for AMD’s first ‘Helios’ powered AI infrastructure in India. By combining our strengths in AI, connectivity, sustainable power, and advanced data center engineering, we are poised to deliver state-of-the-art infrastructure solutions for AI companies and global enterprises. We are thrilled to deepen our longstanding partnership with AMD as we expand our participation in the AI ecosystem – Infrastructure to Intelligence,” said K. Krithivasan, CEO of TCS.

Technically, the Helios rack-scale design represents a shift from traditional server deployments to tightly integrated AI supercomputing clusters. These systems combine thousands of GPUs connected via ultra-low-latency interconnects, high-bandwidth memory subsystems, and software frameworks optimized for distributed training. AMD’s ROCm open software ecosystem is intended to provide a flexible alternative to proprietary GPU programming environments, enabling enterprises and research institutions to develop AI applications without being locked into a single vendor stack.
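To illustrate the vendor-neutrality point: PyTorch’s ROCm builds expose the same `torch.cuda` API on top of AMD’s HIP runtime, so identical training code can target NVIDIA or AMD accelerators. A minimal sketch (the model and tensor shapes here are illustrative, not drawn from the TCS deployment):

```python
# Illustrative sketch: the same PyTorch code runs on CUDA (NVIDIA),
# ROCm (AMD), or CPU, because PyTorch's ROCm build reuses the
# torch.cuda API surface on top of AMD's HIP runtime.
import torch

# Device selection is identical on CUDA and ROCm builds; on a
# machine without a GPU this falls back gracefully to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy model and batch; no vendor-specific calls are needed.
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
logits = model(x)
print(logits.shape)  # torch.Size([32, 10])
```

Because no vendor-specific extension is invoked, the same script is portable across both GPU ecosystems, which is the kind of lock-in avoidance ROCm is intended to enable.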

A major strategic goal behind the collaboration is to reduce dependence on a single AI hardware supplier. Nvidia currently dominates the global AI chip market, with its GPUs powering the majority of AI training clusters worldwide. However, rising costs, supply constraints, and demand for open ecosystems have encouraged enterprises and governments to explore alternatives. Industry estimates suggest AMD’s share of AI hardware is gradually increasing, with some recent enterprise deployments shifting 20-25% of workloads toward AMD-based systems.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.