Offline Model Inference
The Ascend 310 chip can accelerate inference for models trained under the Caffe and TensorFlow frameworks. After model training is complete, you need to convert the trained model to an offline model file (.om file) supported by the Ascend 310, write service code, and call the APIs provided by the Matrix framework to implement the service functions.
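The conversion step is performed with the offline model generator tool shipped with the Ascend DDK. The command below is a sketch only: the tool name (omg), the framework code, and the flag names are assumptions that may differ across DDK releases, and the model file names are placeholders.

    # Convert a trained Caffe model to an .om file for the Ascend 310.
    # Tool name and flags are illustrative; check your DDK documentation.
    omg --model=resnet50.prototxt \
        --weight=resnet50.caffemodel \
        --framework=0 \
        --output=resnet50

The resulting resnet50.om file is the model that the service code loads at run time.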
The Matrix framework encapsulates the AI pre-processing (AIPP) and model inference functions into a single module. When the inference API is called, the Matrix framework invokes AIPP to pre-process the input images, feeds the pre-processed images to the model inference module, and returns the inference result.
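The service code that drives this pipeline typically follows a create-graph, send-data, receive-result pattern. The C++ sketch below is illustrative only: the matrix_* functions, the graph.config file name, and the InferenceResult type are hypothetical stand-ins for the Matrix framework's actual API (which varies by DDK version), stubbed here so the example compiles on its own.

    // Illustrative service-code skeleton. All matrix_* functions are
    // hypothetical placeholders for the Matrix framework API; they are
    // stubbed so the sketch is self-contained.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct InferenceResult {        // placeholder for the framework's output type
        std::vector<float> scores;
    };

    // --- hypothetical Matrix API, stubbed --------------------------------
    bool matrix_create_graph(const std::string& graph_config) {
        // Real code would parse the graph configuration and instantiate the
        // inference graph (AIPP + model inference modules) on the Ascend 310.
        std::printf("graph created from %s\n", graph_config.c_str());
        return true;
    }

    bool matrix_send_data(const std::vector<unsigned char>& image) {
        // Real code would hand the raw image to the graph; the framework
        // then runs AIPP pre-processing and feeds the result to the .om model.
        return !image.empty();
    }

    InferenceResult matrix_wait_result() {
        // Real code would block on (or register a callback for) the graph's
        // output port and deserialize the inference result.
        return InferenceResult{{0.1f, 0.7f, 0.2f}};
    }
    // ----------------------------------------------------------------------

    int main() {
        if (!matrix_create_graph("graph.config")) return 1;

        std::vector<unsigned char> image(224 * 224 * 3, 0);  // dummy input image
        if (!matrix_send_data(image)) return 1;

        InferenceResult result = matrix_wait_result();
        std::printf("top score: %.2f\n", result.scores[1]);
        return 0;
    }

Because AIPP and model inference are packaged together, the service code only supplies raw images and consumes results; the pre-processing step is not invoked separately.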