AIModelManager::Process
Performs inference on input data using a model. For inference on a single model, the buffer queue of the process is limited to 2048 bytes. If the buffer queue length exceeds 2048 bytes, a failure message is returned.
Syntax
virtual AIStatus AIModelManager::Process(AIContext &context, const std::vector<std::shared_ptr<IAITensor>> &in_data, std::vector<std::shared_ptr<IAITensor>> &out_data, uint32_t timeout);
Parameter Description
Parameter | Description | Value Range
---|---|---
context | Context information, including the configurations of variable parameters when the engine is running. For details about the syntax of the AIContext data type, see AIContext. | -
in_data | List of input tensors for the model. For details about the syntax of the IAITensor data type, see IAITensor. | -
out_data | List of output tensors for the model. For details about the syntax of the IAITensor data type, see IAITensor. | -
timeout | Inference timeout. This parameter is reserved. The default value is 0, indicating that the setting does not take effect. | -
Return Value
SUCCESS indicates that processing succeeds, while FAILED indicates that processing fails.
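As an illustrative sketch of how Process might be called, assuming an already-initialized AIModelManager instance and pre-allocated tensors (the wrapper function and variable names below are hypothetical, not taken from the SDK):

```cpp
#include <memory>
#include <vector>

// Hypothetical wrapper: runs one inference pass with a single input and
// output tensor. Assumes modelManager has already loaded the model and
// that the tensors were allocated to match the model's shapes.
AIStatus RunInference(AIModelManager &modelManager, AIContext &context,
                      std::shared_ptr<IAITensor> input,
                      std::shared_ptr<IAITensor> output) {
    std::vector<std::shared_ptr<IAITensor>> inData{input};
    std::vector<std::shared_ptr<IAITensor>> outData{output};

    // timeout is a reserved parameter; pass the default value 0.
    AIStatus status = modelManager.Process(context, inData, outData, 0);
    if (status != SUCCESS) {
        // Handle failure here, for example when the 2048-byte
        // buffer-queue limit is exceeded.
    }
    return status;
}
```

This sketch cannot run standalone, since AIModelManager, AIContext, and IAITensor are defined by the SDK headers; it only shows the expected call shape of Process.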