AI-RAN is moving from lab to field, showing that a software-defined approach is the only viable way to build future AI-native wireless networks.

Ahead of Mobile World Congress (MWC), running March 2-5 in Barcelona, NVIDIA and Nokia announced new AI-RAN collaborations with top telecom operators across Europe, Asia and North America, powered by NVIDIA AI-RAN platforms. Industry pioneers T-Mobile U.S., SoftBank and Indosat Ooredoo Hutchison (IOH) passed implementation milestones, taking NVIDIA-powered AI-RAN outdoors and over the air.
New benchmarking results from partners like SynaXG showed that AI-RAN running on NVIDIA platforms delivers high-speed, carrier-grade performance (meaning extreme reliability) across multiple 5G spectrum bands. And over 20 AI-RAN Alliance demos built on NVIDIA platforms will be showcased at MWC, highlighting how AI is boosting 5G performance and efficiency, and unlocking new edge AI applications.
All of this represents momentum and convergence toward a common, software-defined foundation that will set the stage for secure, open and AI-native 6G systems.
AI-RAN Goes From Lab to Live

Top telecom operators and partners are using NVIDIA platforms to bring AI-RAN to commercial deployment.
T-Mobile U.S. demonstrated concurrent AI and RAN processing on the NVIDIA AI-RAN platform using Nokia's CUDA-accelerated RAN software. In T-Mobile's over-the-air field environment, Nokia's AirScale massive multiple-input and multiple-output (MIMO) radio in the 3.7GHz band supported commercial devices running applications like video streaming, generative AI and AI-powered video captioning, alongside 5G.
SoftBank's AITRAS live field trial achieved an industry first: 16-layer massive MIMO with fully software-defined 5G running on the NVIDIA AI-RAN platform, marking an important technical milestone toward AI-RAN commercialization.
IOH has implemented software-defined 5G with Nokia's vRAN software on NVIDIA AI-RAN platforms, moving from proof of concept to pre-commercial field validation. This milestone was showcased at MWC through Southeast Asia's first AI-powered 5G call, where AI and network intelligence operated seamlessly to enable secure, real-time cross-border connectivity, including responsive remote control of a robotic dog over the live 5G network. This achievement demonstrates IOH's readiness to scale AI-native network capabilities and bring intelligent connectivity to communities across Indonesia.
SynaXG demonstrated fully software-defined AI-RAN using NVIDIA AI Aerial - a suite of accelerated computing platforms, software libraries and tools to build, train, simulate and deploy AI-native wireless networks - running 4G and 5G in both sub-6GHz (FR1) and millimeter-wave (FR2) spectrum bands, alongside agentic AI workloads, on a single NVIDIA GH200 server. This marks the world's first implementation of AI-RAN on FR2 bands.
SynaXG's setup activated 20 component carriers with both a centralized unit (CU) and distributed unit (DU) on one platform, achieving 36 Gbps of throughput with under 10 milliseconds of latency. These breakthrough results highlight AI-RAN-based 5G performance as well as seamless orchestration between AI and RAN workloads.
Tripled Pace of AI-RAN Innovation

This year's MWC will see triple the number of AI-RAN innovations over last year, with 26 of 33 AI-RAN Alliance demos built using NVIDIA AI Aerial and a software-defined architecture.
Some of these demos include:
DeepSig is reinventing how devices speak to networks by letting AI learn a smarter signal format at both ends of the link - the communications channel that connects two devices. An AI-native air interface jointly learns how best to encode and decode signals using neural techniques at the device and base station, removing pilot overhead and adapting to site-specific channels. Early results on NVIDIA platforms show up to roughly 2x higher throughput and better spectral and energy efficiency from the same spectrum.
SUTD, NVIDIA and partners will show how robots and autonomous vehicles can distribute their thinking across the device, edge and cloud - bringing split inferencing from concept to implementation. By deciding in real time where each AI task runs, the demos show how AI-RAN can meet tight latency, privacy and coverage service-level agreements to scale physical AI and vision language models through the network edge.
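The core of split inferencing is a placement decision: for each task, pick the nearest tier that can still meet the service-level agreement. The sketch below is purely illustrative - the tiers, latencies and thresholds are hypothetical, not SUTD's or NVIDIA's actual policy.

```python
# Illustrative split-inference placement policy. All names and numbers
# (tier latencies, compute capacities) are hypothetical assumptions.
import math
from dataclasses import dataclass

@dataclass
class Task:
    latency_budget_ms: float   # SLA: end-to-end inference deadline
    privacy_sensitive: bool    # raw data must not leave the device
    model_gflops: float        # compute cost of one inference

# Assumed round-trip latency and compute capacity for each tier.
TIERS = {
    "device": {"rtt_ms": 0.0,  "gflops_per_s": 50},
    "edge":   {"rtt_ms": 10.0, "gflops_per_s": 2000},
    "cloud":  {"rtt_ms": 60.0, "gflops_per_s": 50000},
}

def place(task: Task) -> str:
    """Pick the closest tier that meets the latency and privacy SLAs."""
    if task.privacy_sensitive:
        return "device"  # privacy constraint pins the task locally
    for tier in ("device", "edge", "cloud"):  # prefer closer tiers
        spec = TIERS[tier]
        compute_ms = task.model_gflops / spec["gflops_per_s"] * 1000
        if spec["rtt_ms"] + compute_ms <= task.latency_budget_ms:
            return tier
    return "cloud"  # nothing meets the SLA; use the most capable tier

print(place(Task(latency_budget_ms=20, privacy_sensitive=False, model_gflops=10)))  # -> edge
```

Here the device is too slow (200 ms of compute) but the edge meets the 20 ms budget (10 ms round trip plus 5 ms of compute), so the task lands at the edge - the kind of real-time decision the demo makes per workload.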
zTouch Networks and partners built an AI-RAN orchestration blueprint showing how operators can safely share GPUs across AI and RAN workloads. By using NVIDIA Multi-Instance GPU technology, the blueprint steers resources in real time, maximizing GPU utilization and improving energy management while ensuring RAN quality of service. This is a key step for making multi-tenant AI-RAN solutions ready for commercial use, so operators can turn GPU capacity into revenue.
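The steering idea behind such a blueprint can be sketched simply: pin enough GPU slices to the RAN to protect its quality of service, and lease the remainder to AI tenants as load changes. The slice counts and scaling rule below are illustrative assumptions, not zTouch's actual orchestration logic.

```python
# Hypothetical sketch of multi-tenant GPU steering with MIG-style slices.
# The slice count and allocation rule are illustrative, not a real policy.
import math

TOTAL_SLICES = 7  # e.g. some data center GPUs split into up to 7 MIG instances

def steer(ran_load: float, total_slices: int = TOTAL_SLICES) -> dict:
    """Allocate slices for a given RAN load in [0, 1].

    RAN always keeps at least one slice and scales with load;
    AI workloads monetize whatever remains.
    """
    if not 0.0 <= ran_load <= 1.0:
        raise ValueError("ran_load must be in [0, 1]")
    ran_slices = min(total_slices, max(1, math.ceil(ran_load * total_slices)))
    return {"ran": ran_slices, "ai": total_slices - ran_slices}

print(steer(0.3))  # light RAN load: spare slices go to AI tenants
print(steer(0.9))  # heavy RAN load: AI capacity shrinks to protect QoS
```

Re-evaluating this allocation continuously is what lets an orchestrator keep utilization high without letting AI tenants starve the RAN.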
Northeastern University and SoftBank will demonstrate an AI switching solution for NVIDIA AI Aerial that flips in microseconds between AI and classic algorithms for channel estimation. It selects, in real time, the best processing path for current conditions, improving stability and throughput while proving AI can coexist with classical approaches.
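A toy version of such a switch is a per-slot selector that routes channel estimation to whichever path is expected to perform better under current conditions. The thresholds and the low-SNR/high-mobility heuristic below are hypothetical assumptions, not the actual Northeastern/SoftBank policy.

```python
# Toy per-slot estimator switch. Thresholds and the selection heuristic
# are hypothetical, for illustration only.

def choose_estimator(snr_db: float, doppler_hz: float) -> str:
    """Select a channel-estimation algorithm for the current slot.

    Assumption: a learned estimator wins in the noisy or fast-fading
    conditions it was trained on; classical least-squares (LS)
    estimation remains the cheap, stable default elsewhere.
    """
    if snr_db < 10.0 or doppler_hz > 300.0:
        return "neural"      # noisy or fast-fading channel: use the AI model
    return "classical_ls"    # well-conditioned channel: LS is cheap and stable

for snr, doppler in [(5.0, 50.0), (25.0, 500.0), (25.0, 50.0)]:
    print(snr, doppler, "->", choose_estimator(snr, doppler))
```

The real system makes this decision in microseconds inside the signal-processing pipeline, but the principle is the same: AI and classical algorithms coexist, with conditions deciding which one runs.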
"AI-RAN is emerging as a unifying architecture for future radio networks," said Alex Choi, chair of the AI-RAN Alliance. "By aligning operators, vendors and researchers around software-defined, GPU-accelerated architectures, we are boosting innovation, validating new concepts quickly and building the foundation for AI-native 6G, now."
As intelligence moves into the physical world, autonomous systems such as robots and cars depend on AI-RAN networks to see, sense, reason and act.
Capgemini is working within Project ULTIMO, a Horizon Europe-funded initiative, to show how AI-RAN can support larg