Updated on 2022-03-13 GMT+08:00

Processing the Input and Output Data of Algorithm Inference

To avoid extra memory copies during algorithm inference, allocate the memory for the input and output data with the HIAI_DMalloc API before calling the Process API of the model manager. Memory allocated this way can be handed to the inference engine without copying (zero-copy), which shortens the processing time.

If DVPP preprocessing is required before inference:

1. Use the memory delivered by the Matrix module as the DVPP input memory.
2. Allocate the DVPP output memory with the HIAI_DVPP_DMalloc API.
3. Pass the DVPP output memory directly to the inference engine as its input memory.
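The zero-copy allocation pattern described above might be sketched as follows. This is an illustrative sketch only, not a compilable sample: the header path, the exact signatures of HIAI_DMalloc/HIAI_DFree, and the names inputSize, outputSize, and engine are assumptions that may differ across HiAI SDK versions; consult the Matrix API reference for the signatures shipped with your version.

```
// Sketch: allocating inference I/O buffers with HIAI_DMalloc (zero-copy path).
// All identifiers below (header path, signatures, inputSize/outputSize/engine)
// are assumptions for illustration; verify against your SDK's API reference.
#include "hiaiengine/ai_memory.h"   // assumed header providing HIAI_DMalloc

uint8_t* inputBuf  = nullptr;
uint8_t* outputBuf = nullptr;

// Allocate input/output memory through the Matrix framework so it can be
// passed to the device without an intermediate copy.
HIAI_DMalloc(inputSize,  reinterpret_cast<void*&>(inputBuf));
HIAI_DMalloc(outputSize, reinterpret_cast<void*&>(outputBuf));

// ... fill inputBuf, then call the model manager's Process API with these
// buffers; because they came from HIAI_DMalloc, no extra copy is needed.

// Release through the matching framework API, not free()/delete.
HIAI_DFree(inputBuf);
HIAI_DFree(outputBuf);
```

When DVPP preprocessing is in the pipeline, the same idea applies with HIAI_DVPP_DMalloc: allocate the DVPP output buffer with it, and pass that buffer straight to the inference engine as its input.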