Updated on 2024-10-29 GMT+08:00

Importing a Meta Model from a Container Image

For AI engines that are not supported by ModelArts, you can import models from custom images.

Constraints

Prerequisites

The OBS directory you use and ModelArts must be in the same region.

Procedure

  1. Log in to the ModelArts console. In the navigation pane, choose AI Applications.
  2. Click Create Applications.
  3. Configure parameters.
    1. Enter basic information. For details, see Table 1.
      Table 1 Basic information

      Name

      Name of the AI application. The value can contain 1 to 64 visible characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

      Version

      Version of the AI application. The default value is 0.0.1 for the first import.

      NOTE:

      After an AI application is created, you can create new versions using different meta models for optimization.

      Description

      Brief description of the AI application.

    2. Select the meta model source and configure related parameters. Set Meta Model Source to Container image. For details, see Table 2.
      Figure 1 Importing a meta model from a container image

      Table 2 Meta model source parameters

      Container Image Path

      Click to import the container image. You do not need to use swr_location in the configuration file to specify the image location.

      For details about how to create a custom image, see Specifications for Custom Images Used for Importing Models.

      NOTE:

      The model image you select will be shared with the system administrator, so ensure you have the permission to share the image (images shared by other accounts are not supported). ModelArts will deploy the image as an inference service. Ensure that your image can be properly started and provide an inference API.

      Container API

      Set the protocol and port number of the inference API defined by the model.

      NOTE:

      The default request protocol and port number provided by ModelArts are HTTP and 8080, respectively. Set them based on the actual custom image. A minimal service that matches these defaults is sketched after this procedure.

      Image Replication

      Indicates whether to copy the model image in the container image to ModelArts.

      • If this feature is disabled, the model image is not copied and AI applications can be created quickly, but modifying or deleting the image in the SWR source directory will affect service deployment.
      • If this feature is enabled, the model image is copied and creating AI applications takes longer, but modifying or deleting the image in the SWR source directory will not affect service deployment.
      NOTE:

      You must enable this feature if you want to use images shared by others. Otherwise, AI applications will fail to be created.

      Health Check

      Specifies the health check configuration for an AI application. This parameter can be configured only if the custom image provides a health check API; otherwise, creating the AI application will fail. A minimal image that exposes such an API is sketched after this procedure. The following probes are supported:

      • Startup Probe: This probe checks if the application instance has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the instance is restarted. If no startup probe is provided, the default status is Success.
      • Readiness Probe: This probe verifies whether the application instance is ready to handle traffic. If the readiness probe fails (meaning the instance is not ready), the instance is taken out of the service load balancing pool. Traffic will not be routed to the instance until the probe succeeds.
      • Liveness Probe: This probe monitors the application health status. If the liveness probe fails (indicating the application is unhealthy), the instance is automatically restarted.

      The parameters of the three types of probes are as follows:

      • Check Mode: Select HTTP request or Command.
      • Health Check URL: Enter the health check URL, which defaults to /health. This parameter is displayed when Check Mode is set to HTTP request.
      • Health Check Command: Enter the health check command. This parameter is displayed when Check Mode is set to Command.
      • Health Check Period (s): Enter an integer ranging from 1 to 2147483647.
      • Delay (s): Set a delay for the health check to occur after the instance has started. The value should be an integer between 0 and 2147483647.
      • Timeout (s): Set the timeout interval for each health check. The value should be an integer between 0 and 2147483647.
      • Maximum Failures: Enter an integer ranging from 1 to 2147483647. If the service fails the specified number of consecutive health checks during startup, it will enter the abnormal state. If the service fails the specified number of consecutive health checks during operation, it will enter the alarm state.
      NOTE:

      If health check is enabled for an AI application, the associated services will stop three minutes after receiving the stop instruction.

      AI Application Description

      AI application descriptions to help other developers better understand and use your application. Click Add AI Application Description and set the document name and URL. You can add up to three descriptions.

      Deployment Type

      Choose the service types for application deployment. The service types you select will be the only options available for deployment. For example, selecting Real-Time Services means the AI application can only be deployed as real-time services.

      Start Command

      Customizable start command of a model.

      NOTE:

      Start commands containing $, |, >, <, `, !, \n, \, ?, -v, --volume, --mount, --tmpfs, --privileged, or --cap-add will be emptied when an AI application is being published.

      API Configuration

      Enable this option to edit RESTful APIs that define the AI application input and output formats. The model APIs must comply with ModelArts specifications. For details, see the apis parameter description in Specifications for Editing a Model Configuration File. For details about the code example, see Code Example of apis Parameters. An illustrative apis structure is also sketched after this procedure.

    3. Check the information and click Create now.

      In the AI application list, you can view the created AI application and its version. When the status changes to Normal, the AI application is created. On this page, you can perform such operations as creating new versions and quickly deploying services.
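
The Container API and Health Check parameters above must match what the custom image actually serves. The following is a minimal sketch of such a service, using only the Python standard library. The port 8080 matches the ModelArts default described above, while the /infer and /health paths, the app.py file name, and the echo-style inference logic are illustrative assumptions rather than values required by ModelArts.

```python
# A minimal sketch of an inference service for a custom image, assuming the
# defaults described in this section (HTTP on port 8080). The /infer and
# /health paths are illustrative; use whatever your image really exposes and
# enter the same values for Container API and Health Check.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Health check API: the HTTP request probe calls this URL (default /health).
        if self.path == "/health":
            self._reply(200, {"status": "ok"})
        else:
            self._reply(404, {"error": "not found"})

    def do_POST(self):
        # Inference API: replace the echo below with real model inference code.
        if self.path == "/infer":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            self._reply(200, {"result": payload})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, body):
        # Serialize the body as JSON and send it with the given status code.
        data = json.dumps(body).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


if __name__ == "__main__":
    # The image's start command (or the Start Command parameter) would run this
    # script, for example: python app.py
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

If the image runs a script like this, Container API would be set to HTTP and port 8080, and the HTTP request check mode would point Health Check URL at /health.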
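
For the Command check mode, the health check command must exit with status 0 when the instance is healthy and a non-zero status otherwise. The sketch below shows one possible way to do that by probing the service's own HTTP health endpoint; the URL reuses the assumed /health path and port 8080 from the previous sketch, and the script name health_check.py is hypothetical.

```python
# A sketch of a health check command for the Command check mode: exit 0 when
# healthy, non-zero otherwise. It probes the assumed /health endpoint on port
# 8080 from the sketch above; adjust the URL to your image.
import sys
import urllib.request


def main() -> int:
    try:
        with urllib.request.urlopen("http://127.0.0.1:8080/health", timeout=2) as resp:
            return 0 if resp.status == 200 else 1
    except Exception:
        # Any connection error or timeout counts as an unhealthy instance.
        return 1


if __name__ == "__main__":
    sys.exit(main())
```

With a script like this packaged in the image, the health check command could simply invoke it, for example python health_check.py; the period, delay, timeout, and maximum failures are still configured in the console.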
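
For API Configuration, the RESTful API definitions follow the apis format in the model configuration file, and the linked specification is authoritative. The sketch below only illustrates the general shape as a Python structure serialized to JSON; the /infer path and the simple string input and output fields are assumptions carried over from the first sketch, so verify the exact field names against Specifications for Editing a Model Configuration File.

```python
# A sketch of an "apis" definition written as a Python structure and printed as
# JSON. The general url/method/request/response layout follows the apis format
# referenced above, but the exact schema, the /infer path, and the "input" and
# "result" fields are assumptions; confirm them against "Specifications for
# Editing a Model Configuration File".
import json

apis = [
    {
        "url": "/infer",   # assumed inference path exposed by the image
        "method": "post",
        "request": {
            "Content-type": "application/json",
            "data": {
                "type": "object",
                "properties": {
                    "input": {"type": "string"}   # illustrative input field
                },
            },
        },
        "response": {
            "Content-type": "application/json",
            "data": {
                "type": "object",
                "properties": {
                    "result": {"type": "string"}  # illustrative output field
                },
            },
        },
    }
]

print(json.dumps(apis, indent=2))
```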

Follow-Up Operations

Deploying a service: In the AI application list, click Deploy in the Operation column of the target AI application. Then, locate the target version, click Deploy, and choose a service type that was selected during AI application creation.