Updated on 2024-04-30 GMT+08:00

Importing a Meta Model from OBS

In scenarios where a frequently used framework is used to develop and train a model, you can import the model to ModelArts and use it to create an AI application for unified management.

Prerequisites

  • The model has been developed and trained, and the type and version of the AI engine used by the model are supported by ModelArts. For details, see Supported AI Engines for ModelArts Inference.
  • The trained model package, inference code, and configuration file have been uploaded to OBS.
  • The OBS directory you use and ModelArts are in the same region.

Creating an AI Application

  1. Log in to the ModelArts management console, and choose AI Application Management > AI Applications in the left navigation pane. The AI Applications page is displayed.
  2. Click Create in the upper left corner.
  3. On the displayed page, set the parameters.
    1. Set basic information about the AI application. For details about the parameters, see Table 1.
      Table 1 Parameters of basic AI application information

      • Name: Application name. The value can contain 1 to 64 visible characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.
      • Version: Version of the AI application to be created. For the first import, the default value is 0.0.1.
        NOTE: After an AI application is created, you can create new versions using different meta models for optimization.
      • Description: Brief description of the AI application.

    2. Select the meta model source and set related parameters. Set Meta Model Source to OBS. For details about the parameters, see Table 2.

      For a meta model imported from OBS, edit the inference code and configuration file according to the model package specifications, and place both files in the model folder that stores the meta model. If the selected directory does not comply with these specifications, the AI application cannot be created.
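      The model package specifications expect the meta model, its inference code, and its configuration file to sit together in a folder named model, conventionally containing config.json and customize_service.py alongside the model file. As a rough illustration (the helper below is hypothetical, not part of ModelArts), a local copy of the package could be sanity-checked before uploading it to OBS:

      ```python
      from pathlib import Path

      def check_model_package(model_dir: str) -> list:
          """Return a list of problems found in a local model folder
          before it is uploaded to OBS (illustrative checks only)."""
          problems = []
          root = Path(model_dir)
          if root.name != "model":
              problems.append("top-level folder should be named 'model'")
          if " " in str(root):
              problems.append("path contains spaces (the OBS path must not)")
          if not (root / "config.json").is_file():
              problems.append("missing config.json")
          if not (root / "customize_service.py").is_file():
              problems.append("missing inference code customize_service.py")
          return problems
      ```

      An empty list means the basic layout looks right; any returned strings describe what to fix before the upload.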

      Table 2 Parameters of the meta model source

      • Meta Model: OBS path for storing the meta model. The OBS path cannot contain spaces. Otherwise, the AI application fails to be created.
      • AI Engine: The AI engine is automatically identified based on the meta model storage path you select. If AI Engine is set to Custom, you must specify the protocol and port number in Container API for starting the model. The request protocol is HTTPS, and the port number is 8080.

      • Health Check: Health check on a model. This parameter is displayed only when you select an AI engine and runtime environment that support health check. If AI Engine is set to Custom, you must configure health check in the image; otherwise, service deployment will fail.
        • Check Mode: Select HTTP request or Command. With a custom engine, you can select either HTTP request or Command; with a non-custom engine, only HTTP request is available.
        • Health Check URL: Displayed when Check Mode is set to HTTP request. Enter the health check URL. The default value is /health.
        • Health Check Command: Displayed when Check Mode is set to Command. Enter the health check command.
        • Health Check Period: Enter an integer ranging from 1 to 2147483647, in seconds.
        • Delay (seconds): Delay for performing the health check after the instance starts. Enter an integer ranging from 0 to 2147483647.
        • Maximum Failures: Enter an integer ranging from 1 to 2147483647. During service startup, if the number of consecutive health check failures reaches this value, the service becomes abnormal. During service running, if the number of consecutive health check failures reaches this value, the service enters the alarm status.
        NOTE:
        To use a custom engine to create an AI application, ensure that the custom engine complies with the specifications for custom engines. For details, see Creating an AI Application Using a Custom Engine.
        If health check is configured for an AI application, services deployed with this AI application will stop 3 minutes after receiving a stop instruction.

      • Dynamic Loading: Enables quick deployment and model update. If selected, model files and runtime dependencies are pulled only during actual deployment. Enable this function if a single model file is larger than 5 GB.
      • Runtime Dependency: Lists the dependencies of the selected model on the environment. For example, if tensorflow is used and the installation method is pip, the version must be 1.8.0 or later.

      • AI Application Description: Provide descriptions to help other AI application developers better understand and use your application. Click Add AI Application Description and set Document name and URL. You can add up to three descriptions.
      • Configuration File: By default, the system associates the configuration file stored in OBS. After enabling this function, you can view and edit the model configuration file.
        NOTE: This function will be taken offline. After that, you can modify the model configuration by setting AI Engine, Runtime Dependency, and Apis.
      • Deployment Type: Select the service types that the application can be deployed as. When deploying a service, only the service types selected here are available. For example, if you select only Real-time services here, you can deploy the AI application only as a real-time service after it is created.

      • API Configuration: After enabling this function, you can edit RESTful APIs to define the input and output formats of the AI application. The model APIs must comply with ModelArts specifications. For details, see Specifications for Editing a Model Configuration File; for a code example, see Code Example of apis Parameters.
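      For a custom engine, the health check endpoint described in Table 2 must be served by the image itself. The following is a minimal sketch using only Python's standard library (not a ModelArts SDK) of a handler that answers the default /health URL with HTTP 200 on container port 8080:

      ```python
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class HealthHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Return 200 on the health check URL so the platform treats
              # the instance as healthy; any other path gets a 404.
              if self.path == "/health":
                  self.send_response(200)
                  self.end_headers()
                  self.wfile.write(b"ok")
              else:
                  self.send_response(404)
                  self.end_headers()

          def log_message(self, fmt, *args):
              # Silence per-request logging in this sketch.
              pass

      def serve(port: int = 8080) -> None:
          # Port 8080 matches the Container API port required for a custom
          # engine; a real image would also serve the inference API here.
          HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
      ```

      In a real custom image, the same server (or your web framework of choice) would handle both the inference requests and the health check.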
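      The Runtime Dependency entries in Table 2 map to a dependencies block in the model configuration file. The fragment below is a hypothetical sketch, built with Python for illustration; the field names (installer, packages, package_name, package_version, restraint) follow the model package specifications, but see Specifications for Editing a Model Configuration File for the authoritative schema:

      ```python
      import json

      # Hypothetical runtime-dependency fragment for config.json:
      # tensorflow installed via pip, version 1.8.0 or later.
      dependencies = [
          {
              "installer": "pip",
              "packages": [
                  {
                      "package_name": "tensorflow",
                      "package_version": "1.8.0",
                      "restraint": "ATLEAST",  # "1.8.0 or later"
                  }
              ],
          }
      ]

      print(json.dumps({"dependencies": dependencies}, indent=2))
      ```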
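      The apis definition referenced under API Configuration describes each endpoint's method, URL, and request/response schema. As a hedged sketch (a single hypothetical POST endpoint that accepts an image upload and returns a predicted label; see Code Example of apis Parameters for the authoritative format):

      ```python
      import json

      # Hypothetical apis entry for a classification model: one POST
      # endpoint taking a multipart file upload and returning JSON.
      apis = [
          {
              "url": "/",
              "method": "post",
              "request": {
                  "Content-type": "multipart/form-data",
                  "data": {
                      "type": "object",
                      "properties": {
                          "images": {"type": "file"}
                      },
                  },
              },
              "response": {
                  "Content-type": "application/json",
                  "data": {
                      "type": "object",
                      "properties": {
                          "predicted_label": {"type": "string"},
                          "score": {"type": "number"},
                      },
                  },
              },
          }
      ]

      print(json.dumps({"apis": apis}, indent=2))
      ```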

    3. Check the information and click Create now. The AI application is created.

      In the AI application list, you can view the created AI application and its version. When the status changes to Normal, the AI application is successfully created. On this page, you can perform such operations as creating new versions and quickly deploying services.

Follow-Up Procedure

Deploying an AI Application as a Service: In the AI application list, click the option button on the left of the AI application name to display the version list at the bottom of the page. Locate the row that contains the target version and click Deploy in the Operation column to deploy the AI application as one of the deployment types selected during AI application creation.