Updated: 2021-03-18 GMT+08:00

Macro: HIAI_IMPL_ENGINE_PROCESS

Users need to overload and implement this macro to define the concrete implementation of an Engine. The macro is defined in engine.h.

The encapsulation of this macro uses the following functions:

static HIAIEngineFactory* GetInstance();
HIAI_StatusT HIAIEngineFactory::RegisterEngineCreator(const std::string& engine_name, HIAI_ENGINE_FUNCTOR_CREATOR engineCreatorFunc);
HIAI_StatusT HIAIEngineFactory::UnRegisterEngineCreator(const std::string& engine_name);

Related macros:

Call this macro after HIAI_DEFINE_PROCESS(inputPortNum, outputPortNum).

Macro Format

HIAI_IMPL_ENGINE_PROCESS(name, engineClass, inPortNum)

Parameter Description

Parameter      Description                                Value Range
name           Engine name specified in the Config.       -
engineClass    Name of the Engine implementation class.   -
inPortNum      Number of input ports.                     -

Return Value

The returned error codes are registered by the user in advance.

Error Code Example

The error codes of this API are registered by the user.

Overload Implementation Example

Define the implementation of an inference Engine. In the implementation, model inference is executed through the AIModelManager::Process interface of the model manager (AIModelManager).

HIAI_IMPL_ENGINE_PROCESS("FrameworkerEngine", FrameworkerEngine, FRAMEWORK_ENGINE_INPUT_SIZE)
{
    hiai::AIStatus ret = hiai::SUCCESS;
    HIAI_StatusT hiai_ret = HIAI_OK;
    // receive data
    // arg0 corresponds to the Engine's input port 0. If there are multiple
    // input ports, arg1 (port 1), arg2 (port 2), and so on correspond to them
    // in order. The data sent by the previous Engine is obtained through the
    // input port.
    std::shared_ptr<std::string> input_arg =
        std::static_pointer_cast<std::string>(arg0);
    if (nullptr == input_arg)
    {
        HIAI_ENGINE_LOG(this, HIAI_INVALID_INPUT_MSG, "[DEBUG] input arg is invalid");
        return HIAI_INVALID_INPUT_MSG;
    }
    std::cout << "FrameworkerEngine Process" << std::endl;

    // prepare for calling the Process of ai_model_manager_
    std::vector<std::shared_ptr<hiai::IAITensor>> input_data_vec;

    uint32_t len = 75264;  // input buffer size in bytes, determined by the model input

    HIAI_ENGINE_LOG("FrameworkerEngine: go to Process");
    std::cout << "FrameworkerEngine: go to Process" << std::endl;
    std::shared_ptr<hiai::AINeuralNetworkBuffer> neural_buffer = std::make_shared<hiai::AINeuralNetworkBuffer>();
    neural_buffer->SetBuffer((void*)(input_arg->c_str()), (uint32_t)(len));
    std::shared_ptr<hiai::IAITensor> input_data = std::static_pointer_cast<hiai::IAITensor>(neural_buffer);
    input_data_vec.push_back(input_data);

    //  call Process and inference
    hiai::AIContext ai_context;
    std::vector<std::shared_ptr<hiai::IAITensor>> output_data_vec;
    ret = ai_model_manager_->CreateOutputTensor(input_data_vec, output_data_vec);
    if (hiai::SUCCESS != ret)
    {
        HIAI_ENGINE_LOG(this, HIAI_AI_MODEL_MANAGER_PROCESS_FAIL, "[DEBUG] fail to create output tensor");
        return HIAI_AI_MODEL_MANAGER_PROCESS_FAIL;
    }

    ret = ai_model_manager_->Process(ai_context, input_data_vec, output_data_vec, 0);

    if (hiai::SUCCESS != ret)
    {
        HIAI_ENGINE_LOG(this, HIAI_AI_MODEL_MANAGER_PROCESS_FAIL, "[DEBUG] fail to process ai_model");
        return HIAI_AI_MODEL_MANAGER_PROCESS_FAIL;
    }
    std::cout << "[DEBUG] output_data_vec size is " << output_data_vec.size() << std::endl;
    for (uint32_t index = 0; index < output_data_vec.size(); index++)
    {
        // send data of inference to destEngine
        std::shared_ptr<hiai::AINeuralNetworkBuffer> output_data = std::static_pointer_cast<hiai::AINeuralNetworkBuffer>(output_data_vec[index]);
        std::shared_ptr<std::string> output_string_ptr = std::make_shared<std::string>((char*)output_data->GetBuffer(), output_data->GetSize());

        hiai_ret = SendData(0, "string", std::static_pointer_cast<void>(output_string_ptr));
        if (HIAI_OK != hiai_ret)
        {
            HIAI_ENGINE_LOG(this, HIAI_SEND_DATA_FAIL, "fail to send data");
            return HIAI_SEND_DATA_FAIL;
        }
    }
    return HIAI_OK;
}