Processing the Input and Output Data of Algorithm Inference
To avoid memory copies during algorithm inference, you are advised to allocate the input and output buffers with the HIAI_DMalloc API when calling the process API of the model manager. This enables zero-copy data transfer and reduces processing time. If DVPP preprocessing is required before inference, use the memory delivered by the Matrix framework as the DVPP input memory, allocate the DVPP output memory with the HIAI_DVPP_DMalloc API, and pass that DVPP output memory directly to the inference engine as its input. A sketch of this allocation flow follows.
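The following is a minimal, illustrative sketch of the allocation flow described above. The header path, the exact signatures of HIAI_DMalloc, HIAI_DVPP_DMalloc, and HIAI_DFree, and the buffer sizes are assumptions for illustration; verify them against the API reference of your DDK version.

    // Minimal sketch of zero-copy buffer allocation for inference.
    // NOTE: the header name and the exact signatures of HIAI_DMalloc /
    // HIAI_DVPP_DMalloc / HIAI_DFree are assumptions; check the API
    // reference of your DDK version before use.
    #include <cstdint>
    #include "hiaiengine/ai_memory.h"   // assumed header for hiai::HIAIMemory

    bool PrepareBuffers(uint32_t inputSize, uint32_t outputSize, uint32_t dvppOutSize,
                        void*& inferInput, void*& inferOutput, void*& dvppOutput)
    {
        // Inference input/output: allocate with HIAI_DMalloc so the Matrix
        // framework can hand the buffers to the device without copying them.
        if (hiai::HIAIMemory::HIAI_DMalloc(inputSize, inferInput) != HIAI_OK) {
            return false;
        }
        if (hiai::HIAIMemory::HIAI_DMalloc(outputSize, inferOutput) != HIAI_OK) {
            hiai::HIAIMemory::HIAI_DFree(inferInput);
            return false;
        }

        // DVPP output: allocate with HIAI_DVPP_DMalloc. When DVPP preprocessing
        // runs before inference, this buffer is handed to the inference engine
        // as its input, so no extra copy is needed between DVPP and inference.
        if (hiai::HIAIMemory::HIAI_DVPP_DMalloc(dvppOutSize, dvppOutput) != HIAI_OK) {
            hiai::HIAIMemory::HIAI_DFree(inferOutput);
            hiai::HIAIMemory::HIAI_DFree(inferInput);
            return false;
        }
        return true;
    }

In a typical engine, the DVPP input buffer itself is the memory already delivered by the Matrix framework, so no allocation is needed for it; only the DVPP output and the inference output are allocated by the engine as shown above.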