Example - Model Management
The following is an example of model management:
#! /usr/bin/python3.7
import hilens
import numpy as np

def run():
    # Construct a camera.
    cap = hilens.VideoCapture()
    # Obtain a frame of image. The image obtained by the built-in camera is in
    # YUV_NV21 format. The default resolution is 720p.
    frame = cap.read()

    # Load the model.
    # filepath cannot be a file name only. If the model and program are in the
    # same directory, the relative path should be ./my_model.om.
    # If the model is added on the skill development page, use
    # hilens.get_model_dir() to obtain the directory where the model is located.
    # The directory should be as follows:
    # model = hilens.Model(hilens.get_model_dir() + "my_model.om")
    # If there are multiple models, load them separately.
    model1 = hilens.Model("./my_model1.om")
    model2 = hilens.Model("./my_model2.om")
    model3 = hilens.Model("./my_model3.om")

    # Assume that the input of model 1 is a 480 x 480 YUV_NV21 image and the
    # data type is uint8.
    pro = hilens.Preprocessor()
    input1 = pro.resize(frame, 480, 480, 1)
    input1 = input1.flatten()
    # Perform inference.
    output1 = model1.infer([input1])

    # Assume that the input of model 2 is the output (a list) of model 1 and
    # the data type is float32.
    input2 = output1
    # Perform inference.
    output2 = model2.infer(input2)

    # Assume that model 3 has multiple inputs and the data type is float32.
    ip_0 = (sample_data[0]).transpose(0, 3, 1, 2).astype(np.float32).flatten()
    ip_1 = (sample_data[1]).transpose(0, 3, 1, 2).astype(np.float32).flatten()
    ip_2 = (sample_data[2]).transpose(0, 3, 1, 2).astype(np.float32).flatten()
    ip_3 = (sample_data[3]).transpose(0, 3, 1, 2).astype(np.float32).flatten()
    ip_4 = (sample_data[4]).transpose(0, 3, 1, 2).astype(np.float32).flatten()
    # Perform inference.
    output3 = model3.infer([ip_0, ip_1, ip_2, ip_3, ip_4])

    # Other processing
    pass

if __name__ == '__main__':
    hilens.init("hello")
    run()
    hilens.terminate()
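The flattened input buffer must match the model's expected size exactly. For a YUV_NV21 image, the buffer holds a full-resolution Y plane (h x w bytes) followed by an interleaved VU plane at quarter resolution (h x w / 2 bytes), so a 480 x 480 frame occupies 480 * 480 * 3 / 2 = 345600 bytes, not the 480 * 480 * 3 = 691200 bytes of a 3-channel RGB image. The following sketch (the helper name `nv21_size` is illustrative, not part of the hilens API) shows how to compute the expected size before calling inference:

```python
def nv21_size(height, width):
    # YUV_NV21 layout: a full-resolution Y plane (height * width bytes)
    # followed by an interleaved VU plane subsampled 2x2
    # (height * width / 2 bytes), i.e. 1.5 bytes per pixel in total.
    return height * width * 3 // 2

# A 480 x 480 NV21 frame, as expected by model 1 in the example above.
print(nv21_size(480, 480))   # 345600

# A 720p frame from the built-in camera.
print(nv21_size(720, 1280))  # 1382400
```

Comparing `input1.size` against this value before calling `infer()` is a cheap way to catch resolution or format mismatches early.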
If the actual inference input does not match the model's expected input, inference fails. In that case, the return value of infer() is an int error code, and the error details are recorded in the logs, which you can use to locate the problem. The following is an example:
>>> input0 = np.zeros((480*480*3), dtype='uint8')
>>> outputs = model.infer([input0])
2019-09-30 18:44:24,075 [ERROR][SFW] Ascend 310: aiModelManager Process failed, please check your input. Model info:
inputTensorVec[0]: name=data n=1 c=3 h=480 w=480 size=345600
outputTensorVec[0]: name=output_0_reg_reshape_1_0 n=1 c=6750 h=1 w=1 size=27000
your input size:0: 691200;
>>> outputs
17
>>> type(outputs)
<class 'int'>
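Because a failed call returns a bare int instead of a list of outputs, it is easy to pass the error code on to later processing by mistake. A small defensive wrapper (a sketch; `check_infer_result` is not part of the hilens API) can convert the error code into an exception at the call site:

```python
def check_infer_result(outputs):
    # On success, Model.infer() returns a list of output arrays.
    # On failure, it returns an int error code (details are in the device log).
    if isinstance(outputs, int):
        raise RuntimeError("inference failed with error code {}".format(outputs))
    return outputs

# Usage (model and input1 as in the example above):
#   outputs = check_infer_result(model.infer([input1]))
```

Raising immediately keeps the error close to the log entry that explains it, instead of surfacing later as a confusing type error.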