NVIDIA and Google Cloud have collaborated for more than a decade, co-engineering a full-stack AI platform that spans every technology layer - from performance-optimized libraries and frameworks to enterprise-grade cloud services. This foundation enables developers, startups and enterprises to push agentic and physical AI out of the lab and into production - from agents that manage complex workflows to robots and digital twins on the factory floor.
At Google Cloud Next this week in Las Vegas, the partnership reaches a new milestone, with advancements to expand Google Cloud AI Hypercomputer for AI factories that will power the next frontier of agentic and physical AI.
These include the new NVIDIA Vera Rubin-powered A5X bare-metal instances; a preview of Google Gemini on Google Distributed Cloud running on NVIDIA Blackwell and NVIDIA Blackwell Ultra GPUs; confidential VMs with NVIDIA Blackwell GPUs; and agentic AI on Gemini Enterprise Agent Platform with NVIDIA Nemotron open models and the NVIDIA NeMo framework.
Next-Generation Infrastructure: From NVIDIA Blackwell to Vera Rubin

At Google Cloud Next, Google announced A5X powered by NVIDIA Vera Rubin NVL72 rack-scale systems, which - through extreme co-design across chips, systems and software - deliver up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation.
A5X will use NVIDIA ConnectX-9 SuperNICs, combined with next-generation Google Virgo networking, scaling to up to 80,000 NVIDIA Rubin GPUs within a single-site cluster and up to 960,000 NVIDIA Rubin GPUs in a multisite cluster, enabling customers to run their largest AI workloads on NVIDIA-optimized infrastructure.
"At Google Cloud, we believe the next decade of AI will be shaped by customers' ability to run their most demanding workloads on a truly integrated, AI-optimized infrastructure stack," said Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud. "By combining Google Cloud's scalable infrastructure and managed AI services with NVIDIA's industry-leading platforms, systems and software, we're giving customers flexibility to train, tune and serve everything from frontier and open models to agentic and physical AI workloads - while optimizing for performance, cost and sustainability."
Google Cloud's broad NVIDIA Blackwell portfolio ranges from A4 VMs with NVIDIA HGX B200 systems to rack-scale A4X VMs with NVIDIA GB200 NVL72 and A4X Max NVIDIA GB300 NVL72 systems, all the way to fractional G4 VMs with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
Customers can right-size their acceleration capabilities, whether using multiple interconnected NVL72 racks that scale out to tens of thousands of NVIDIA Blackwell GPUs, a single rack that can scale up to 72 Blackwell GPUs with fifth-generation NVIDIA NVLink and NVLink 5 Switch, or just one-eighth of a GPU.
This comprehensive platform helps teams optimize every workload, from mixture-of-experts reasoning, multimodal inference and data processing to complex simulations for the next frontier of physical AI and robotics.
Leading frontier AI labs are already putting this infrastructure to work. Thinking Machines Lab is scaling its Tinker application programming interface (API) on A4X Max VMs with GB300 NVL72 systems to accelerate training, while OpenAI is running large-scale inference on NVIDIA GB300 NVL72 systems (A4X Max VMs) and GB200 NVL72 systems (A4X VMs) on Google Cloud for some of its most demanding inference workloads, including for ChatGPT.
Secure AI Wherever It Needs to Run: Sovereign and Confidential

Google Gemini models running on NVIDIA Blackwell and Blackwell Ultra GPUs are now in preview on Google Distributed Cloud, so customers can bring Google's frontier models wherever their most sensitive data resides.
NVIDIA Confidential Computing with the NVIDIA Blackwell platform enables Gemini models to run in a protected environment where prompts and fine-tuning data stay encrypted and can't be seen or altered by unauthorized parties, including the infrastructure operators.
In the public cloud, the preview of Confidential G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs brings these protections to multi-tenant environments - helping safeguard prompts, AI models and data so customers in regulated industries can access the power of AI without compromising on security or performance.
This is the first confidential computing offering with NVIDIA Blackwell GPUs in the cloud, giving Google Cloud customers a new foundation for secure, high-performance AI.
Open Models and APIs for Agentic AI

The NVIDIA platform on Google Cloud is optimized to run every kind of model - from Google's frontier Gemini and Gemma families to NVIDIA Nemotron open models and the broader open-weight ecosystem - equipping developers to build agentic AI systems that reason, plan and act.
NVIDIA Nemotron 3 Super is available on Gemini Enterprise Agent Platform, giving developers a direct path to discovering, customizing and deploying NVIDIA-optimized reasoning and multimodal models for agentic workflows.
Google Cloud and NVIDIA are also making it easier to train and customize open models at scale. Managed Training Clusters on Gemini Enterprise Agent Platform introduce a new managed reinforcement learning (RL) API built with NVIDIA NeMo RL for accelerating RL training at scale while automating cluster sizing, failure recovery and job execution, so teams can focus on agent behavior and model quality instead of infrastructure management.
Cybersecurity leader CrowdStrike uses NVIDIA NeMo open libraries such as NeMo Data Designer, NeMo Automodel and NeMo Megatron Bridge to generate synthetic data and fine-tune Nemotron and other open large language models for domain-specific cybersecurity. Running on Managed Training Clusters o










