Before You Start
This topic describes the basic knowledge, requirements, and precautions for using the Atlas 500 to develop services.
You are advised to read this section carefully before starting development.
Application Scenario
This document applies to inference tasks using the Atlas 500.
Key Concepts
| Concept | Description |
|---|---|
| Ascend 310 | The Ascend 310 is a high-performance, low-power-consumption AI chip designed for scenarios such as image recognition, video processing, inference computing, and machine learning. The chip has two built-in AI Cores, supports 128-bit LPDDR4X, and provides up to 16 TOPS (Float16/INT8) of computing power. |
| DDK | The Mind Studio solution provides the Digital Development Kit (DDK) for developers. Install the DDK to obtain the APIs, libraries, and tool chains required for development on Mind Studio. |
| Graph | In the HiAI framework, a graph is not the computational graph of a deep learning framework; it describes the entire service processing flow as a program flow consisting of multiple engines. |
| HiAI Engine | HiAI Engine is a universal service flow execution engine. It consists of an Agent that runs on the host and a Manager that runs on the device. Each engine provides a function implemented in user code; that is, you implement each engine's processing program yourself. |
| Host | The host is the OS running on the Hi3559A CPU. |
| Device | The device is the OS running on the Ascend 310. |
| DVPP | The digital vision pre-processing (DVPP) module supports pre-processing operations such as image/video decoding and scaling. It can also encode and output the processed videos and images. |
| AIPP | AI pre-processing (AIPP) provides functions such as format conversion, padding/cropping, color space conversion (CSC, YUV2RGB or RGB2YUV), scaling, and channel data exchange. |
| OMG | The offline model generator (OMG) converts models trained with frameworks such as Caffe and TensorFlow into offline models supported by Huawei chips. The OMG also performs model optimizations that are independent of the device, such as operator scheduling optimization, weight data rearrangement and compression, and memory usage optimization. |
| OME | The offline model executor (OME) loads converted offline models for inference. |
| Ctrl CPU | Each Ascend 310 chip has four control (Ctrl) CPUs, which handle service logic processing. |
| AI CPU | Each Ascend 310 chip has four AI CPUs, which handle operator task scheduling and implement some operators. |
| AI Core | Each Ascend 310 chip has two AI Cores, which perform matrix computing. |
| IPC | An IP camera (IPC) provides RTSP data streams. |
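To illustrate how the OMG and OME concepts above fit together, the sketch below shows a typical offline-model conversion step for a Caffe network. The flag names (`--model`, `--weight`, `--framework`, `--output`), the framework code `0`, and the `resnet50` file names are assumptions based on common DDK usage, not taken from this document; check the OMG manual shipped with your DDK release for the exact options.

```shell
#!/bin/sh
# Hypothetical OMG invocation: convert a trained Caffe model into a Huawei
# offline model (.om) that the OME can later load for inference.
# All flag names and the framework code below are assumptions; verify them
# against the OMG documentation in your DDK version.
if command -v omg >/dev/null 2>&1; then
  # --framework=0 is assumed to select Caffe; the result would be
  # resnet50.om, ready to be loaded by the offline model executor (OME).
  omg --model=resnet50.prototxt --weight=resnet50.caffemodel \
      --framework=0 --output=resnet50
else
  # The OMG ships with the DDK; it is not a standard system tool.
  echo "omg not found: install the DDK and add its tools to PATH"
fi
```

Because the conversion runs on the development host rather than on the device, you can perform it once in advance and deploy only the resulting offline model to the Atlas 500.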