Artificial intelligence (AI) has developed rapidly in the past decade. It has surpassed human performance in modeling nonlinear patterns in large data samples and in making accurate online decisions through interaction with the environment, and it has achieved great success in computer vision, natural language processing, and robot control. This rapid development stems, on the one hand, from breakthroughs in AI algorithms represented by deep learning and reinforcement learning, and on the other hand, from the rapid decline in the cost, and the resulting popularization, of AI computing power represented by GPUs.
Since 5G, AI has gradually come into wide use in mobile communication networks, for example in network configuration optimization at the network management level, resource scheduling optimization at the network element level, and even at the physical layer of the air interface. In addition, more and more AI applications are appearing on the terminal side. Looking to the future, the 6G network needs to facilitate the digitalization and intelligentization of thousands of industries, and to provide intelligent services with lower latency and better performance than cloud intelligence. For operators, network operation costs need to be greatly reduced, and network operation and maintenance needs to evolve from locally intelligent scenarios to high-level network autonomy.
At present, AI applications are mainly based on centralized cloud resources. Cloud servers aggregate large amounts of data, use centralized computing power to preprocess them, and train and validate AI models. However, transmitting large amounts of raw data across the network not only puts enormous pressure on transmission bandwidth and performance indicators (such as latency), but also poses great challenges to data privacy protection. Moreover, owing to the lack of computing power, algorithms, and data, intelligent applications on the terminal side still leave much room for improvement.
In the face of the above challenges, it is necessary to build native AI capabilities into the network, abandoning the patched-on mode of AI applications and realizing the deep integration of communication connections, computing, data, and AI models at the network architecture level. The distributed computing power and data in the network are fully exploited through coordination mechanisms among network nodes and between terminals and the network, integrating distributed and centralized processing. In this way, not only can data privacy be protected, but the efficiency of data processing, the timeliness of decision-making and inference, and the utilization of network nodes can also be improved.

This white paper first introduces the driving forces and application scenarios of native intelligence. The demand for native AI support in the 6G network is derived from the current status of intelligent network applications and from the requirements for high-level network autonomy, ubiquitous intelligence, high-value network services, extreme service experience, and network safety and trustworthiness. The paper then elaborates on the definition and scope of native AI and proposes the deep integration of AI computing power, data, algorithms, and network connections. New concepts of 6G native AI are introduced, including quality of AI service (QoAIS), full-life-cycle orchestration of AI workflows, integration of computing and communication, and integration of native AI with digital twins. A new architecture driven by native AI is then proposed and described in detail, covering the data plane, the smart plane, and the extended control and user planes, together with new technologies including AI model orchestration, distributed model training, distributed model inference, and pre-validation and optimization via digital twins. Finally, future research directions are discussed.
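To make the idea of coordinating terminals and network nodes without moving raw data more concrete, the following is a minimal federated-averaging sketch. It is only an illustration of the general distributed-training pattern, not an implementation from this white paper; all function names, the linear model, and the parameter choices are hypothetical. Each client computes gradient updates on its private data, and a server aggregates only the model weights.

```python
# Minimal federated-averaging sketch (hypothetical illustration):
# clients train locally on private data; the server aggregates only
# model weights, so raw data never leaves the terminal.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (MSE loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server-side aggregation, weighted by each client's sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four terminals, each holding private local data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):  # communication rounds between terminals and server
    w = federated_round(w, clients)
print(np.round(w, 2))  # converges toward the underlying weights
```

The same round-based structure carries over to neural models; only the local update rule and the aggregated parameters change.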