At GTC 2026, Nvidia launched the Vera CPU, a processor built specifically to meet the growing demands of agentic AI: systems that can autonomously perceive, reason, and act. Its architecture is optimized for highly parallel, data-intensive AI operations, built around 88 custom ‘Olympus’ cores, each capable of multithreading. The design allows Vera to process AI workloads at high throughput, handling the simultaneous simulation of many agent behaviours.
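As a conceptual illustration of what "simultaneous simulation of many agent behaviours" means in software, the sketch below steps a pool of toy agents concurrently. This is generic Python, not Vera-specific or Nvidia code; the agent logic and worker count are hypothetical stand-ins.

```python
# Conceptual sketch: many independent agent simulations stepped in
# parallel, the kind of workload a highly multithreaded CPU targets.
# Not Nvidia code; the agent logic here is a hypothetical placeholder.
from concurrent.futures import ThreadPoolExecutor

def simulate_agent(agent_id: int, steps: int = 1000) -> int:
    """Toy agent loop: perceive -> reason -> act, reduced to arithmetic."""
    state = agent_id
    for _ in range(steps):
        state = (state * 31 + 7) % 1_000_003  # stand-in for one decision step
    return state

# One task per core of the article's 88-core figure; worker count is illustrative.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(simulate_agent, range(88)))

print(len(results))  # 88 agents simulated
```

In a real agentic workload each task would run an environment step or policy evaluation rather than arithmetic, but the parallel fan-out pattern is the same.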
“Vera is arriving at a turning point for AI. As intelligence becomes agentic — capable of reasoning and acting — the importance of the systems orchestrating that work is elevated. The CPU is no longer simply supporting the model; it’s driving it. With breakthrough performance and energy efficiency, Vera unlocks AI systems that think faster and scale further,” Jensen Huang (CEO, Nvidia) noted.
Vera is built on an Arm-compatible CPU architecture, enhanced with Nvidia’s own optimizations for AI performance and energy efficiency. According to the firm, this delivers up to twice the efficiency and 50% faster processing for AI workloads compared with traditional x86 server CPUs. By co-designing CPU cores with GPU capabilities, the company aims to create an AI infrastructure where both training and autonomous execution can scale efficiently without over-relying on GPU resources alone.
Memory bandwidth has long been a major limitation for AI systems, and Nvidia claims Vera addresses it with LPDDR5X memory providing over 1.2 terabytes per second of data transfer. Combined with NVLink‑C2C connections capable of 1.8 terabytes per second between the CPU and GPU, this lets Vera work in tight coordination with Nvidia GPUs. In practical terms, Vera targets workloads where AI must act with minimal human supervision: reinforcement learning, where environments are simulated at high speed, as well as autonomous systems in robotics, logistics, and decision-making AI.
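To put the quoted bandwidth figures in perspective, a back-of-envelope calculation shows ideal data-movement times at those rates. The 1.2 TB/s and 1.8 TB/s figures come from the article; the example payload (a 70-billion-parameter model in FP16) is a hypothetical illustration, not an Nvidia specification, and real transfers carry overheads these peak numbers ignore.

```python
# Back-of-envelope transfer times at the bandwidths quoted for Vera.
# Bandwidth figures are from the article; the payload is hypothetical.

def transfer_time_seconds(payload_bytes: float, bandwidth_tb_per_s: float) -> float:
    """Ideal (peak-bandwidth) time to move payload_bytes, ignoring overheads."""
    return payload_bytes / (bandwidth_tb_per_s * 1e12)

payload = 70e9 * 2  # hypothetical: 70B parameters in FP16 (2 bytes each) = 140 GB

memory_read = transfer_time_seconds(payload, 1.2)  # LPDDR5X, per article
cpu_to_gpu = transfer_time_seconds(payload, 1.8)   # NVLink-C2C, per article

print(f"LPDDR5X read:    {memory_read:.3f} s")
print(f"NVLink-C2C copy: {cpu_to_gpu:.3f} s")
```

Even under these idealized assumptions, streaming 140 GB takes roughly a tenth of a second, which is why bandwidth between CPU memory and the GPU matters for agent workloads that move large state frequently.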
However, the Vera CPU is not a standalone product but part of Nvidia’s broader Vera Rubin AI platform, which integrates Rubin GPUs, high-speed networking, and storage accelerators. The platform is designed for hyperscale deployments in data centers, enabling organizations to run large-scale AI agents and simulations more effectively.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →