
Performing Inference Using a Model

After a model is initialized, call the inference API to perform inference with it: pass in a group of input data and obtain the inference result. If the input data is not a list of uint8 or float32 arrays, a ValueError is raised. A usage sketch follows the parameter and return value descriptions below.

  • API calling

    hilens.Model.infer(inputs)

  • Parameter description
    Table 1 Parameters

    | Parameter | Mandatory | Type | Description |
    |-----------|-----------|------|-------------|
    | inputs    | Yes       | List | Inference input. A list consisting of a group of uint8 or float32 arrays. Multiple inputs are supported. |

  • Return value

    Model output: a list consisting of a group of float arrays. Multiple outputs are supported.
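
The following is a minimal sketch of a typical call sequence. Only hilens.Model.infer(inputs) and its input/output types are documented in this section; the model path helper hilens.get_model_dir(), the model file name example_model.om, and the input shape are assumptions used for illustration.

```python
import numpy as np
import hilens

# Load and initialize the model. The path helper and file name below are
# assumptions for this sketch, not part of the infer() API described here.
model = hilens.Model(hilens.get_model_dir() + "example_model.om")

# Prepare one input: a float32 array, e.g. a preprocessed image.
# The shape (1, 3, 224, 224) is only an example.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# infer() takes a list of uint8 or float32 arrays; passing any other type
# raises ValueError. Multiple inputs may be supplied in the same list.
outputs = model.infer([input_data])

# The return value is a list of float arrays, one entry per model output.
for i, out in enumerate(outputs):
    print("output %d: %s" % (i, getattr(out, "shape", len(out))))
```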