
Querying the AI Application List

Function

This API is used to query the AI application list based on different search parameters.

Debugging

You can debug this API through automatic authentication in API Explorer or use the SDK sample code generated by API Explorer.

URI

GET /v1/{project_id}/models

Table 1 Path Parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Project ID. For details, see Obtaining a Project ID and Name.

Table 2 Query Parameters

Parameter

Mandatory

Type

Description

model_name

No

String

Model name. Fuzzy match is supported. If a model name contains an underscore (_), the underscore must be escaped: add the exact_match parameter to the request and set it to true so that the query works properly.

exact_match

No

Boolean

Whether to escape underscores (_). If a model name contains underscores (_), set this parameter to true to ensure that the query works properly. By default, this parameter is left blank.

model_version

No

String

Model version. The format is Digit.Digit.Digit, where each Digit is a one- or two-digit positive integer. A digit segment cannot start with 0; for example, 01.01.01 is invalid.

model_status

No

String

Model status. You can obtain models based on model statuses. Options:

  • publishing: The model is being published.

  • published: The model has been published.

  • failed: Publishing the model failed.

  • building: The image is being created.

  • building_failed: Creating an image failed.

description

No

String

Description. Fuzzy match is supported.

offset

No

Integer

Index of the query page. Default value: 0

limit

No

Integer

Maximum number of records returned on each page. Default value: 1000

sort_by

No

String

Sorting field. Enums:

  • create_at: time when an AI application is created (default value)

  • model_version: AI application version

  • model_size: AI application size

order

No

String

Sorting order. Enums:

  • asc: ascending order

  • desc: descending order (default value)

workspace_id

No

String

Workspace ID. For details about how to obtain the value, see Querying the Workspace List. If no workspace is created, the default value is 0. If a workspace is created and used, the actual value prevails.

model_type

No

String

Model type. Only models of this type are queried. model_type and not_model_type are mutually exclusive. The value can be TensorFlow, PyTorch, MindSpore, Image, Custom, or Template.

not_model_type

No

String

Model type to exclude. Models of all types except this one are queried. not_model_type and model_type are mutually exclusive. The value can be TensorFlow, PyTorch, MindSpore, Image, Custom, or Template.

Request Parameters

Table 3 Request header parameters

Parameter

Mandatory

Type

Description

X-Auth-Token

Yes

String

User token. It can be obtained by calling the IAM API that is used to obtain a user token. The value of X-Subject-Token in the response header is the user token.
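
For reference, the following is a minimal Python sketch of calling this API with the X-Auth-Token header and a few of the query parameters from Table 2. It assumes the third-party requests library; the endpoint, project ID, and token values are placeholders, not real values.

import requests

endpoint = "https://{endpoint}"      # placeholder ModelArts endpoint
project_id = "{project_id}"          # placeholder project ID
token = "{X-Auth-Token}"             # user token obtained from the IAM API

url = f"{endpoint}/v1/{project_id}/models"
params = {
    "model_name": "mnist",    # fuzzy match on the model name
    "sort_by": "create_at",   # sort by creation time (default)
    "order": "desc",          # descending order (default)
}
headers = {"X-Auth-Token": token}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
print(response.json()["total_count"])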

Response Parameters

Status code: 200

Table 4 Response body parameters

Parameter

Type

Description

models

Array of ModelListItem objects

Model metadata

total_count

Integer

Total number of models that meet the search criteria when no paging is performed

count

Integer

Total number of models that meet the search criteria

Table 5 ModelListItem

Parameter

Type

Description

owner

String

User ID of the tenant to which a model belongs

model_version

String

Model version

model_type

String

Model type

description

String

Model description

project

String

Project ID of the tenant to which a model belongs

source_type

String

Model source type. This parameter is valid and its value is auto only if the model is deployed using ExeML.

model_id

String

Model ID

model_source

String

Model source. Options:

  • auto: ExeML

  • algos: built-in algorithm

  • custom: custom model

install_type

Array of strings

Deployment types supported by a model

model_size

Integer

Model size, in bytes

workspace_id

String

Workspace ID. For details about how to obtain the value, see Querying the Workspace List. If no workspace is created, the default value is 0. If a workspace is created and used, the actual value prevails.

model_status

String

Model status

market_flag

Boolean

Whether a model is subscribed from AI Gallery

tunable

Boolean

Whether a model can be tuned. true indicates that the model can be tuned, and false indicates that it cannot.

model_name

String

Model name

create_at

Long

Time when a model was created, in milliseconds since 00:00:00 UTC on January 1, 1970.

publishable_flag

Boolean

Whether a model can be published to AI Gallery

source_copy

String

Whether to enable image replication. This parameter is valid only when model_type is set to Image.

  • true: Image replication is enabled. After this function is enabled, AI applications cannot be rapidly created, and modifying or deleting an image in the SWR source directory will not affect service deployment.

  • false: Image replication is not enabled. After this function is disabled, AI applications can be rapidly created, but modifying or deleting an image in the SWR source directory will affect service deployment.

If this parameter is not configured, image replication is enabled by default.

tenant

String

Account ID of the tenant to which a model belongs

subscription_id

String

Model subscription ID

extra

String

Extended parameter

specification

ModelSpecification object

Deployment specifications.

deployment_constraints

DeploymentConstraints object

Deployment constraints.

Table 6 ModelSpecification

Parameter

Type

Description

min_cpu

String

Minimal CPU specifications

min_gpu

String

Minimal GPU specifications

min_memory

String

Minimum memory

min_ascend

String

Minimal Ascend specifications

Table 7 DeploymentConstraints

Parameter

Type

Description

accelerators

Array of Accelerator objects

Supported accelerator card types. Custom images support only one accelerator card type during import.

cpu_type

String

CPU type.

input_types

Array of strings

Input type in asynchronous mode and video service scenarios, for example, OBS and VIS. This parameter is used for importing custom images; for non-custom images, it is declared in the runtime.

output_types

Array of strings

Output type in asynchronous mode and video service scenarios, for example, OBS and DIS. This parameter is used for importing custom images; for non-custom images, it is declared in the runtime.

request_mode

String

Request mode of a job when the model is deployed as a service.

rsa

Rsa object

Used for secure communication between the container and the inference platform, or for field encryption. This parameter is used for importing custom images; for non-custom images, it is declared in the runtime.

service_config

String

Service deployment field, which can be specified during service deployment. This parameter is used for importing custom images. Non-custom images are declared in runtime. As the structure is rather complex, use the value in the XML format.

task_config

String

Field of job-related configurations, which can be specified during job creation. This parameter is used for importing custom images. Non-custom images are declared in runtime. As the structure is rather complex, use the value in the XML format.

model_security

ModelSecurity object

Model encryption and decryption parameters.

Table 8 Accelerator

Parameter

Type

Description

type

String

Accelerator card type. The options are as follows:

  • npu

  • gpu

  • none

name

String

Accelerator card name, for example, SNT9B.

cuda_version

String

CUDA driver version.

driver_version_section

String

Driver version set.

Table 9 Rsa

Parameter

Type

Description

mode

String

Encryption mode.

private_key

String

Private key.

public_key

String

Public key.

Table 10 ModelSecurity

Parameter

Type

Description

model_key

String

Model key. After the model is encrypted using Edge, the root key, model key, and encrypted model can be obtained.

root_key

String

Root key. After the model is encrypted using Edge, the root key, model key, and encrypted model can be obtained.

security_policy

String

Model encryption mode.

is_verify_app

Boolean

Specifies whether to verify the files in edge scenarios, including the configuration files, container images, and library files.

Example Requests

GET https://{endpoint}/v1/{project_id}/models
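
The following illustrative request (placeholder values) filters by a model name that contains an underscore, so exact_match is set to true, and sorts the results by creation time in descending order:

GET https://{endpoint}/v1/{project_id}/models?model_name=mnist_model&exact_match=true&sort_by=create_at&order=desc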

Example Responses

Status code: 200

Models

{
  "total_count" : 1,
  "count" : 1,
  "models" : [ {
    "model_name" : "mnist",
    "model_version" : "1.0.0",
    "model_id" : "10eb0091-887f-4839-9929-cbc884f1e20e",
    "model_type" : "tensorflow",
    "model_size" : 5012312,
    "tenant" : "6d28e85aa78b4e1a9b4bd83501bcd4a1",
    "project" : "d04c10db1f264cfeb1966deff1a3527c",
    "owner" : "6d28e85aa78b4e1a9b4bd83501bcd4a1",
    "create_at" : 1533041553000,
    "description" : "mnist model",
    "workspace_id" : "0",
    "specification" : { }
  } ]
}
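
Because each response returns at most limit records, a caller that needs every match typically pages through the results with offset and limit until total_count records have been collected. The following is a minimal Python sketch under the same placeholder assumptions as above (endpoint, project ID, and token are not real values; the requests library is assumed):

import requests

endpoint = "https://{endpoint}"        # placeholder ModelArts endpoint
project_id = "{project_id}"            # placeholder project ID
headers = {"X-Auth-Token": "{token}"}  # user token obtained from the IAM API

url = f"{endpoint}/v1/{project_id}/models"
limit = 100
offset = 0
models = []

while True:
    response = requests.get(url, headers=headers,
                            params={"offset": offset, "limit": limit})
    response.raise_for_status()
    body = response.json()
    models.extend(body["models"])
    # Stop once all records matching the search criteria have been fetched.
    if len(models) >= body["total_count"] or not body["models"]:
        break
    offset += 1  # offset is documented as a page index, so advance one page at a time

print(len(models), "models fetched")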

Status Codes

Status Code

Description

200

Models

Error Codes

See Error Codes.