Huawei's AI-equipped CPU "Kirin 970" is an important product that shows the path smartphones should take.


  • By huaweicomputers
  • 11/10/2022

What Mr. Yu spent the most time on was the AI-dedicated accelerator, which the company calls the NPU (Neural Network Processing Unit).

Mr. Yu said, "By implementing the NPU in half the die area of the CPU, AI can be processed with 50 times the power efficiency of the CPU. In the performance test that recognizes 200 photos, it is more than 20 times faster than the CPU."

NPU performance: photo recognition is 20 times faster than on the CPU; performance is 25 times higher, yet the die size is only half that of the CPU block; it runs at very low power consumption; comparison with competitors' products.
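Taken together, those slide figures also pin down the power draw implicitly: if performance is 25 times the CPU's and performance per watt is 50 times, the NPU must finish the same work while drawing half the CPU's power. Below is a minimal sketch of that arithmetic; the 25x and 50x ratios come from the slide, while the normalized baseline values are placeholders.

```python
# Derive the NPU's relative power draw from the slide's ratios:
# performance = 25x the CPU, performance per watt (efficiency) = 50x.
cpu_perf = 1.0    # normalized CPU performance (placeholder baseline)
cpu_power = 1.0   # normalized CPU power draw (placeholder baseline)

npu_perf = 25 * cpu_perf                      # 25x performance (slide)
npu_efficiency = 50 * (cpu_perf / cpu_power)  # 50x perf/watt (slide)

# power = performance / (performance per watt)
npu_power = npu_perf / npu_efficiency
print(npu_power)  # 0.5 -> half the CPU's power for 25x the throughput
```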

Mr. Yu described this NPU only as "a processor dedicated to AI processing" and did not explain its specific architecture, but since a photo-recognition demonstration was used as the benchmark, it would be appropriate to think of it as an accelerator that performs deep-learning inference.


Deep learning, the computational method used to implement AI, is roughly divided into two phases: learning (training) and inference. Training is the work of feeding data into a multi-layer neural network modeled on the human brain, called a DNN (Deep Neural Network), to teach the AI, and it requires powerful general-purpose processors. It is therefore common to connect multiple GPUs, such as NVIDIA Tesla, and run the computation across them.

In contrast, inference is the work of using the DNN trained in the learning phase to perform tasks such as image recognition and voice recognition.
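To make the two phases concrete, here is a minimal sketch using TensorFlow, one of the frameworks named later in this article; the tiny model and random data are stand-ins, not a real image-recognition network.

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in DNN; real recognition networks are far deeper.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Learning (training): load labeled data and adjust the weights.
# This is the heavy phase typically run on banks of GPUs.
x_train = np.random.rand(1000, 32).astype("float32")
y_train = np.random.randint(0, 10, size=1000)
model.fit(x_train, y_train, epochs=3, verbose=0)

# Inference: run the already-trained network forward on new input.
# This fixed forward pass is what an NPU-style accelerator offloads.
x_new = np.random.rand(1, 32).astype("float32")
print(model.predict(x_new, verbose=0).argmax())  # predicted class
```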

In recent years, it has become common to build an inference accelerator into edge SoCs. For example, NVIDIA equips "Xavier", the next-generation SoC it announced at GTC in May, with the DLA (Deep Learning Accelerator), an accelerator dedicated to inference (see "NVIDIA's SoC 'Xavier' for self-driving cars, with 7 billion transistors and 20W power consumption").

The advantage of an on-chip inference accelerator is that it can run inference at high performance with overwhelmingly lower power consumption than general-purpose processors such as the CPU and GPU. The accelerator can only execute processing that has been prepared in advance, but for fixed workloads such as image recognition it is overwhelmingly faster and runs at low power. For this reason, inference accelerators have begun to be built into SoCs for autonomous driving, and the same trend has now arrived for smartphones.
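In practice, handing a fixed workload to such an accelerator looks like the sketch below: the app loads a pre-converted TensorFlow Lite model and, on a real device, attaches a hardware delegate so the forward pass runs on the SoC's accelerator rather than the CPU. This is a hedged illustration rather than Huawei's SDK; the model file name and the delegate library path are assumptions.

```python
import numpy as np
import tensorflow as tf

# Load a model converted ahead of time (see the conversion sketch at
# the end of the article); accelerators only run such fixed graphs.
interpreter = tf.lite.Interpreter(model_path="model.tflite")

# On an actual device, a delegate routes the forward pass to the
# SoC's accelerator instead of the CPU; the library path varies:
# delegate = tf.lite.experimental.load_delegate("libnnapi_delegate.so")
# interpreter = tf.lite.Interpreter(model_path="model.tflite",
#                                   experimental_delegates=[delegate])

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder input
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # the fixed, low-power forward pass
print(interpreter.get_tensor(out["index"]).shape)
```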

Examples of AI functions realized by the NPU, and the applications envisioned for them.

The NPU is designed to be used across a wide range of AI tasks, such as task scheduling, memory allocation, UI graphics processing, and image recognition. For example, the AI can watch the behavior of the entire device to shut down unnecessary tasks and adjust memory allocation; and beyond image recognition, the AI can also drive the camera app so that you shoot with white balance and exposure set optimally.
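As a toy illustration of that camera example, the flow can be imagined as: the NPU's image recognition classifies the scene, and the camera app maps the label to white-balance and exposure presets. Every name and value below (the presets and the classify_scene stand-in) is hypothetical, sketching the idea rather than Huawei's actual pipeline.

```python
# Hypothetical sketch: NPU scene label -> camera settings.
# Labels, presets, and classify_scene() are invented for illustration.
PRESETS = {
    "night":     {"white_balance_k": 3200, "exposure_ev": +1.0},
    "portrait":  {"white_balance_k": 5000, "exposure_ev": +0.3},
    "landscape": {"white_balance_k": 5600, "exposure_ev": 0.0},
}

def classify_scene(frame) -> str:
    """Stand-in for image-recognition inference run on the NPU."""
    return "portrait"  # a real app would run a DNN here

def configure_camera(frame):
    scene = classify_scene(frame)
    settings = PRESETS.get(scene, PRESETS["landscape"])
    return scene, settings  # applied before the shot is taken

print(configure_camera(frame=None))
```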

The software model: Huawei will provide a software development kit and related tools.

Huawei will not only implement these AI features in its own software, but will also provide an SDK to third parties. Software can be developed using deep learning frameworks that are widely applied to deep-learning development today, such as TensorFlow/TensorFlow Lite and Caffe/Caffe2.
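With those frameworks, the usual workflow is to train a model in full TensorFlow and then convert it for on-device inference with TensorFlow Lite. A minimal sketch of that conversion step follows; the untrained placeholder model stands in for a real trained network.

```python
import tensorflow as tf

# Stand-in for a trained Keras model (see the training sketch above).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the graph into a compact .tflite file that a phone's
# runtime can execute and an on-SoC accelerator can then run.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```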