Boosting AI-Driven Innovation in 6G with the AI-RAN Alliance, 3GPP, and O-RAN

The pace of 6G research and development is picking up as the 5G era crosses the midpoint of the decade-long cellular generation time frame. In this blog post, we highlight how NVIDIA is playing an active role in the emerging 6G field, enabling innovation and fostering collaboration in the industry. 

NVIDIA is not only delivering AI-native 6G tools but is also working with partners and industry groups to accelerate innovation. As Figure 1 shows, the 6G innovation effort leverages AI-native tools based on the NVIDIA Aerial platform, including NVIDIA Aerial CUDA-Accelerated RAN, NVIDIA Aerial Omniverse Digital Twin, and NVIDIA Aerial AI Radio Frameworks, along with accelerated computing on GPU-based platforms.

Likewise, NVIDIA is working with its partners and the wider telecommunications ecosystem (including the AI-RAN Alliance, 3GPP, and O-RAN) to drive AI/ML-enabled innovations that will shape the requirements and opportunities for the 6G era. These innovations are integrated into platforms, tools, and blueprints enabling 6G research and development.

Figure 1. AI and Accelerated Computing will define 6G

AI blueprints for the Radio Access Network

The radio access network (RAN) is the most computationally intensive part of the cellular network. It will be the focus of many of the tangible new 6G features and capabilities, improving performance and enabling use cases and applications that can only be realized by using AI/ML natively in the RAN.

AI/ML methodologies are proving effective in addressing the increasing complexity of the RAN. In outlining its expectations for IMT-2030 technologies (such as 6G), the International Telecommunication Union (ITU) has proposed that the new 6G air interface be AI-native and use AI/ML to enhance the performance of radio interface functions such as symbol detection/decoding and channel estimation.
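To ground what learned estimators compete against, the following is a minimal NumPy sketch of classical least-squares channel estimation from pilots, the baseline that AI/ML approaches aim to improve upon. This is illustrative only (not pyAerial or 3GPP code), and all parameters here (64 subcarriers, 3-tap channel, pilot spacing of 4) are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy OFDM setup: known pilot symbols on every 4th subcarrier.
n_sc = 64
pilot_idx = np.arange(0, n_sc, 4)
pilots = np.exp(1j * np.pi / 4) * np.ones(pilot_idx.size)  # unit-power QPSK pilots

# Synthetic 3-tap frequency-selective channel plus AWGN on the pilots.
taps = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(6)
h_true = np.fft.fft(taps, n_sc)            # true channel frequency response
noise = 0.05 * (rng.normal(size=pilot_idx.size)
                + 1j * rng.normal(size=pilot_idx.size))
rx_pilots = h_true[pilot_idx] * pilots + noise

# Classical least-squares estimate at pilot positions, then linear interpolation
# across the remaining subcarriers (real and imaginary parts separately).
h_ls = rx_pilots / pilots
h_hat = np.interp(np.arange(n_sc), pilot_idx, h_ls.real) \
      + 1j * np.interp(np.arange(n_sc), pilot_idx, h_ls.imag)

mse = np.mean(np.abs(h_hat - h_true) ** 2)
print(f"LS + interpolation channel estimation MSE: {mse:.4f}")
```

A learned estimator replaces the interpolation step (or the whole pipeline) with a trained model, exploiting channel statistics that the least-squares baseline ignores.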

From a standardization perspective, 3GPP is operationalizing the ITU proposal for the AI-native air interface (Figure 2). In Release 18, 3GPP conducted a first-of-its-kind study on AI/ML for the 5G new radio (NR) air interface, investigating a general framework for AI/ML as well as selected use cases including channel state information (CSI) feedback, beam management, and positioning. Release 19 expands on this work on at least three fronts.

First, the Release 19 work item on AI/ML for the NR air interface includes several study objectives to address outstanding issues identified during the Release 18 study. Second, Release 19 will support one-sided AI/ML models by specifying signaling and protocol aspects related to life cycle management (LCM), where a one-sided model can be either UE-sided or network-sided. Third, in Release 19, 3GPP will conduct a dedicated study on AI/ML for mobility in the NR air interface that will further consider information available at the UE side.

Specifically, the study will investigate AI/ML-based radio resource management prediction for both the UE-sided model and network-sided model, as well as the prediction of events (such as handover failure, radio link failure, and measurement events) for the UE-sided model. 
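As a toy illustration of the kind of UE-sided measurement prediction described above, the sketch below forecasts the next serving-cell RSRP sample from a sliding window of past measurements and applies a simple handover-likelihood rule. The linear autoregressive fit is a stand-in for an AI/ML model, the RSRP trace is entirely synthetic, and the threshold is a hypothetical value, not 3GPP-specified behavior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic serving-cell RSRP trace (dBm): slow decline, slow fading, noise.
t = np.arange(200)
rsrp = -80 - 0.08 * t + 2.0 * np.sin(t / 15) + rng.normal(scale=0.5, size=t.size)

# Stand-in "model": linear AR predictor fit on sliding windows of k samples.
k = 8
X = np.stack([rsrp[i:i + k] for i in range(len(rsrp) - k)])
y = rsrp[k:]
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# One-step-ahead prediction from the most recent window.
pred = np.r_[rsrp[-k:], 1.0] @ w
print(f"predicted next RSRP: {pred:.1f} dBm")

# Trivial event rule: flag a likely handover below a (hypothetical) threshold.
print("handover likely" if pred < -90.0 else "serving cell still adequate")
```

A standardized solution would replace the regression with a trained model under LCM signaling, and the threshold rule with the event-prediction outputs the study targets (handover failure, radio link failure, measurement events).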

NVIDIA contributed to the completion of the 3GPP study on AI/ML for the 5G NR air interface in Release 18. NVIDIA is now contributing to the 3GPP Release 19 work item on AI/ML for 5G NR air interface, introducing specification support for AI/ML usage in 5G-Advanced toward 6G.

Figure 2. An overview of AI in 5G-Advanced in 3GPP Release 18

The O-RAN Alliance is also conducting an AI-focused transformation towards an open and interoperable architecture that is natively intelligent. By using AI/ML-based technologies, O-RAN aims to integrate intelligence into every layer of the open RAN architecture. The introduction of the RAN intelligent controller (RIC) in the O-RAN architecture has been an important development, making it possible to introduce AI/ML-based solutions to a wide variety of use cases. 

O-RAN Alliance’s next Generation Research Group (nGRG) is driving research efforts on enabling AI-native architecture and features for next-generation open RAN. This also includes cross-domain AI between RAN and other domains of physical networks, or even beyond the realm of physical network boundaries, between physical and digital twin network domains. 

NVIDIA co-chairs the O-RAN Alliance nGRG and collaborates with key industry partners to further open RAN-centric 6G research initiatives across five research streams:

Use Cases and Requirements

Architecture

Native & Cross-Domain AI

Security

Next Generation Research Platform

Figure 3. A phased approach towards AI enablement at O-RAN

NVIDIA is working with industry leaders in the AI-RAN Alliance to accelerate implementations of AI-driven air interfaces. Unlike standards-setting organizations, whose efforts focus on developing specification documents for interoperability, the AI-RAN Alliance focuses on creating implementation blueprints and benchmarking the efficacy of AI/ML algorithms for the new AI-native RAN.

These blueprints can be used by the community to develop their own versions of algorithms supporting the same or new features, and the benchmarking results can be used to evaluate the performance of the algorithms and the associated AI/ML frameworks. The alliance is also chartered to define blueprints for multi-tenant systems in which the RAN and other workloads, such as generative AI inference, dynamically share the same infrastructure resources to increase utilization (AI-and-RAN). It further aims to define blueprints for implementing next-generation AI-driven applications on the RAN infrastructure (AI-on-RAN) and to advance RAN capabilities with AI/ML-powered algorithms that improve spectral efficiency (AI-for-RAN).

NVIDIA is also working directly with the developer community on creating and testing new AI/ML algorithms with the NVIDIA Aerial AI Radio Frameworks. These provide a package of AI enhancements to enable training and inference in the RAN. The framework tools (pyAerial, NVIDIA Aerial Data Lake, and NVIDIA Sionna) span the research space from AI/ML algorithm exploration to AI/ML model training and inference, providing blueprints to explore different AI/ML configurations for the RAN.

pyAerial is a Python library of physical layer components that can be used as part of the workflow in taking a design from simulation to real-time operation. Figure 4 shows an example of its use for a neural receiver. Aerial Data Lake is a data capture platform supporting the capture of over-the-air (OTA) RF data from virtual RAN (vRAN) networks built on the Aerial CUDA-Accelerated RAN.

NVIDIA Sionna is a GPU-accelerated open-source library for link-level simulations. It enables rapid prototyping of complex communication system architectures and provides native support for the integration of machine learning in 6G signal processing. These AI Radio frameworks enable AI enhancement to the NVIDIA Aerial CUDA-Accelerated RAN, which is a framework for building commercial-grade, software-defined, and cloud-native 5G and future 6G RANs.
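To give a flavor of the link-level experiments such a library accelerates, here is a self-contained Monte Carlo bit error rate (BER) sweep for BPSK over an AWGN channel in plain NumPy. This is illustrative only; it does not show Sionna's actual API, which runs the same kind of batched simulation on the GPU with differentiable components:

```python
import numpy as np

rng = np.random.default_rng(2)

def ber_bpsk_awgn(ebno_db: float, n_bits: int = 200_000) -> float:
    """Estimate BPSK BER over AWGN at a given Eb/N0 (dB) by Monte Carlo."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                     # map 0 -> +1, 1 -> -1
    sigma = np.sqrt(1.0 / (2.0 * 10 ** (ebno_db / 10)))  # noise std per dimension
    rx = symbols + sigma * rng.normal(size=n_bits)
    return float(np.mean((rx < 0) != bits.astype(bool)))  # hard-decision errors

for ebno in (0, 4, 8):
    print(f"Eb/N0 = {ebno} dB  ->  BER ~ {ber_bpsk_awgn(ebno):.4f}")
```

In an ML-integrated workflow, the mapper, channel, or receiver in such a loop is replaced with a trainable model and the whole chain is optimized end to end, which is precisely what GPU acceleration and native ML support make practical at scale.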

Figure 4. An Aerial AI Radio Frameworks example building a neural receiver with pyAerial 

Digital twin networks

As the industry designs the AI-native air interface for 6G, two challenges must be resolved: a system-level, deterministic, ray-tracing-based simulator that can generate vast amounts of synthetic data for training AI/ML models, and high-fidelity full-system simulation of city-scale networks before those models are deployed in the physical network. A digital twin network (DTN) addresses these challenges by fully emulating a physical 5G/6G network, mirroring its characteristics, behaviors, and configurations so that developers can create AI/ML models and test and fine-tune them in a simulated environment.

The ITU expects a symbiotic interplay between physical and digital twin networks, enabling DTNs to efficiently and intelligently verify, simulate, deploy, and manage 6G-era networks in real time. In Recommendation Y.3090, "Digital Twin Network – Requirements and Architecture," the ITU laid down foundational considerations for the functional and service requirements of DTNs, their security considerations, and a potential architectural blueprint. Technologies like DTNs will enhance the use of the 6G system as a sensing network by clarifying how radio frequency traffic characteristics can be used to determine attributes such as the distance, angle, and velocity of objects and the characteristics of the surrounding environment.

In TS 22.137, 3GPP has defined service requirements for wireless sensing, under its integrated sensing and communication (ISAC) topic area, in three use case classes: object detection and tracking, environment monitoring, and motion monitoring. The ISAC project explores the potential of using telecommunication infrastructure as a combined wireless communication and sensing network. It provides input to various industries, such as unmanned aerial vehicles (UAVs), smart homes, vehicle-to-everything (V2X), factories, railways, and public safety, opening up new revenue streams for telecommunications companies.
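As a minimal illustration of the sensing principle behind object detection and tracking, the round-trip delay of an echo, and hence a target's range, can be recovered by matched filtering a known transmit waveform against the received signal. This pure-NumPy sketch uses hypothetical parameter values (sample rate, range, amplitudes) and is not the 3GPP ISAC channel model:

```python
import numpy as np

rng = np.random.default_rng(3)
c = 3e8          # speed of light, m/s
fs = 100e6       # sample rate, Hz (hypothetical)

# Transmit a known wideband probe; the echo returns delayed and attenuated.
n = 4096
tx = rng.normal(size=n)
true_range = 450.0                              # metres, one-way (hypothetical)
delay = int(round(2 * true_range / c * fs))     # round-trip delay in samples
rx = np.zeros(n + delay)
rx[delay:] = 0.3 * tx                           # attenuated echo
rx += 0.05 * rng.normal(size=rx.size)           # receiver noise

# Matched filtering: the cross-correlation peak locates the round-trip delay.
corr = np.correlate(rx, tx, mode="full")
est_delay = int(np.argmax(corr)) - (n - 1)
est_range = est_delay * c / (2 * fs)
print(f"estimated range: {est_range:.1f} m")
```

Doppler processing extends the same idea to velocity, which is why faithful delay, angle, and Doppler statistics in the channel model matter so much for evaluating ISAC techniques.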

To enable the proper evaluation of ISAC techniques, wireless channel modeling must provide consistency and, above all, a correct representation of the frequency, spatial, and temporal correlation across base stations, devices, and objects in the environment. Achieving this without a propagation model grounded in the underlying physics of the scattering phenomena is impractical, prone to modeling error, and a waste of effort for the industry. These considerations call for deterministic, physics-based modeling of wireless propagation, especially ray tracing, in a DTN. NVIDIA has been contributing to channel modeling for ISAC in 3GPP Release 19, championing a deterministic, ray-tracing-based channel model for ISAC.

At the AI-RAN Alliance, NVIDIA and its partners are exploring how to use DTN for system-wide and site-specific optimizations for the AI-for-RAN workstreams. One of the key use cases is the ability to generate synthetic data to train the AI/ML models and then deploy them in the digital twin to validate their performances before turning them on in the physical RAN system. 

At O-RAN Alliance nGRG, NVIDIA is working with other partners to develop a set of industry guidelines on digital twin RAN (DT-RAN), its enabling technologies, and its implementation blueprints. 

NVIDIA has provided a DTN tool to the developer community to help accelerate 6G research and development. The NVIDIA Aerial Omniverse Digital Twin (AODT) is a next-generation, system-level simulation platform for performing cutting-edge AI-native air interface research and development on 5G and 6G wireless systems. Applying ray-traced channels to the physical (PHY) and medium access control (MAC) layers of the RAN, AODT is a tool to benchmark system performance, generate synthetic data, and explore AI/ML-based wireless communication algorithms under real-world conditions in system-level simulations.

The modular design enables researchers and partners to replace any module with their own innovative design and to harden the system for commercial solutions. For example, Ansys showcased AODT integrated with the Perceive EM Solver at the IMS 2024 Conference and Exhibition, enabling its customers to explore AI/ML and virtual RAN. The upcoming release will introduce a friendlier user interface, advanced geospatial capabilities and scattering models, 64TRx support, and more advanced AI capabilities in NVIDIA Omniverse.

Figure 5. NVIDIA Aerial Omniverse Digital Twin

Over-the-air 3GPP-compliant network as an innovation sandbox

An over-the-air (OTA) development, validation, and benchmarking platform for AI/ML algorithms complements the simulated results from DTNs in 6G development. It provides a full-stack platform for the industry to learn and benchmark new AI/ML algorithms, publish blueprints, and benchmark KPIs, specific error measurements, and air-interface-specific performance indicators. This is crucial for defining the innovation required for 6G and for benchmarking and evaluating both standardized and implementation-specific AI/ML methodologies.

Standards bodies have acknowledged the need for such an innovation sandbox. In Release 17, 3GPP conducted a study on AI-enabled RAN intelligence to identify a set of high-level principles to guide the standards work, notably that AI algorithms and models are implementation-specific and thus are not expected to be standardized.

Accordingly, 3GPP defined a reference functional framework for AI-enabled RAN intelligence with common terminologies such as data collection, model training, and model inference to address the three use cases—network energy saving, load balancing, and mobility optimization. 3GPP Release 18 is focused on the normative work to specify data collection enhancements and signaling support for the use cases, while Release 19 will enhance the support of more use cases using AI/ML. O-RAN’s AI/ML guiding principles also call for trained AI/ML models to be validated before deployment.

NVIDIA has developed an innovation sandbox for the developer community, Aerial RAN CoLab Over-the-Air (ARC-OTA), which leverages disaggregated, off-the-shelf hardware and software components. ARC-OTA is a 3GPP Release 15-compliant, OTA-operational, O-RAN 7.2x split, campus 5G SA 4T4R wireless full stack, with all the network elements from the RAN to the 5G core.

The NVIDIA Aerial CUDA-Accelerated RAN Layer 1 is integrated with the OpenAirInterface (OAI) distributed unit (DU) and centralized unit (CU) of the 5G NR gNB, along with the 5G core network elements. ARC-OTA offers full-stack programmability, with complete access to source code for onboarding any experiment, with quick-turnaround validation and benchmarking results.

ARC-OTA is a versatile reference full-stack network sandbox for developers to onboard plugins and extensions. One early example is Northeastern's OpenRANGym, which integrates the O-RAN OSC RIC for dynamic network adaptability, supporting not only KPM-based monitoring but also control through potential DRL-based xApps.

Northeastern has also been conducting research within nGRG on distributed applications (dApps) for real-time inference and control in O-RAN.

Another example is Sterling's Kubernetes service orchestration and monitoring developer extension. Similarly, other developers can contribute blueprints to accelerate the pace of innovation in 6G.

ARC-OTA is the physical OTA complement to AODT, enabling a researcher to move between a fully simulated network and a physical OTA network with the same models and algorithmic optimizations for 6G research. Together with pyAerial and Sionna, which provide link-level simulation, and Data Lake, which provides training data, these interlinked tools deliver a compelling platform for innovative 6G research (Figure 6).

Figure 6. NVIDIA tools and platforms for 6G research

Ramping up AI/ML for 6G

The pace of AI/ML adoption in 6G will continue to intensify as the pathway to the standards becomes clearer and commercial deployment draws closer. NVIDIA is working with its partners and the wider ecosystem to develop new tools and methodologies for addressing 6G opportunities and challenges.

A key platform for ongoing and future collaboration is the NVIDIA 6G Developer Program, where over a thousand 6G researchers are using tools within the NVIDIA Aerial platform to experiment on AI/ML blueprints, DTNs, and innovation sandboxes. 

We invite all researchers actively working on 6G-related projects to join the NVIDIA 6G Developer Program.
