
AIModelManager::Process

Processes input data with a model. When inference is performed on a single model, the buffer queue length of the process is limited to 2048 bytes. If this limit is exceeded, a failure is returned.

Syntax

virtual AIStatus AIModelManager::Process(AIContext &context, const std::vector<std::shared_ptr<IAITensor>> &in_data, std::vector<std::shared_ptr<IAITensor>> &out_data, uint32_t timeout);

Parameter Description

| Parameter | Description | Value Range |
|-----------|-------------|-------------|
| context | Context information, including the configuration of the variable parameters used while the engine is running. For details about the AIContext data type, see AIContext. | - |
| in_data | List of input tensors for the model. For details about the IAITensor data type, see IAITensor. | - |
| out_data | List of output tensors for the model. For details about the IAITensor data type, see IAITensor. | - |
| timeout | Inference timeout. This parameter is reserved. The default value is 0, indicating that the setting does not take effect. | - |

Return Value

SUCCESS indicates that the processing succeeds, while FAILED indicates that the processing fails.
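The following is a minimal C++ sketch of a synchronous call to Process. It assumes the model has already been loaded and initialized elsewhere and that the tensors in in_data have been created and filled; the wrapper function, its name, and the header comment are illustrative assumptions, not part of this interface. Only AIModelManager::Process, AIContext, IAITensor, AIStatus, and SUCCESS come from this reference.

```cpp
// Sketch only. Include the SDK headers that declare AIModelManager, AIContext,
// IAITensor, AIStatus, and SUCCESS (exact header names depend on your SDK
// installation and are not specified here).
#include <memory>
#include <vector>

// Hypothetical wrapper: runs one inference pass on already-prepared tensors.
AIStatus RunInference(AIModelManager &manager,
                      AIContext &context,
                      const std::vector<std::shared_ptr<IAITensor>> &in_data,
                      std::vector<std::shared_ptr<IAITensor>> &out_data) {
    // timeout is reserved; 0 means the setting does not take effect.
    const uint32_t timeout = 0;

    AIStatus ret = manager.Process(context, in_data, out_data, timeout);
    if (ret != SUCCESS) {
        // Processing failed, for example because the process buffer queue
        // exceeded the 2048-byte limit described above.
        return ret;
    }

    // On success, out_data holds the model's output tensors.
    return SUCCESS;
}
```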