Updated on 2024-11-06 GMT+08:00

Querying a Job

Function

This API is used to query a specified job or, when the job name is set to all, all jobs in a cluster.

Calling Method

For details, see Calling APIs.

URI

GET /v1.1/{project_id}/clusters/{cluster_id}/cdm/job/{job_name}

Table 1 Path Parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details about how to obtain it, see Project ID and Account ID. |
| cluster_id | Yes | String | Cluster ID |
| job_name | Yes | String | Job name. If this parameter is set to all, all jobs are queried. |

Table 2 Query Parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| filter | No | String | Used for fuzzy job filtering when job_name is set to all. |
| page_no | No | Integer | Page number |
| page_size | No | Integer | Number of jobs on each page. The value ranges from 10 to 100. |
| jobType | No | String | Type of the jobs to be queried: NORMAL_JOB (table/file migration job), BATCH_JOB (entire DB migration job), or SCENARIO_JOB (scenario migration job). If this parameter is not specified, only table/file migration jobs are queried by default. |
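The page_no and page_size parameters above can be combined to walk the full job list. A minimal paging sketch, assuming a caller-supplied fetch_page function that issues the actual GET request and returns the parsed response body:

```python
def iter_jobs(fetch_page, page_size=100):
    """Yield every job across pages.

    fetch_page(page_no, page_size) is assumed to perform the GET request
    and return the parsed response body (a dict with a "jobs" list).
    page_size must stay within the documented 10-100 range.
    """
    page_no = 1
    while True:
        jobs = fetch_page(page_no, page_size).get("jobs", [])
        yield from jobs
        if len(jobs) < page_size:  # a short page means no more results
            return
        page_no += 1
```

A short final page signals the end of the list, so no extra request for an empty page is needed when the last page is only partially filled.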

Request Parameters

Table 3 Request header parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| X-Auth-Token | Yes | String | User token. It can be obtained by calling the IAM API (value of X-Subject-Token in the response header). |

Response Parameters

Status code: 200

Table 4 Response body parameters

| Parameter | Type | Description |
| --- | --- | --- |
| total | Integer | Number of jobs. The value is 0 when a single job is queried. |
| jobs | Array of Job objects | Job list. For details, see the descriptions of jobs parameters. |
| page_no | Integer | Page number. Jobs on the specified page are returned. |
| page_size | Integer | Number of jobs on each page |

Table 5 Job

| Parameter | Type | Description |
| --- | --- | --- |
| job_type | String | Job type: NORMAL_JOB (table/file migration), BATCH_JOB (entire DB migration), or SCENARIO_JOB (scenario migration) |
| from-connector-name | String | Source link type. The options are: generic-jdbc-connector (link to a relational database), obs-connector (link to OBS), hdfs-connector (link to HDFS), hbase-connector (link to HBase or CloudTable), hive-connector (link to Hive), ftp-connector/sftp-connector (link to an FTP or SFTP server), mongodb-connector (link to MongoDB), redis-connector (link to Redis/DCS), kafka-connector (link to Kafka), dis-connector (link to DIS), elasticsearch-connector (link to Elasticsearch/Cloud Search Service), dli-connector (link to DLI), http-connector (link to an HTTP/HTTPS server; no link parameters required), dms-kafka-connector (link to DMS Kafka) |
| to-config-values | ConfigValues object | Destination link parameters, which vary depending on the destination. For details, see Destination Job Parameters. |
| to-link-name | String | Name of the destination link, that is, the name of a link created through the link creation API |
| driver-config-values | ConfigValues object | Job parameters, such as Retry upon Failure and Concurrent Extractors. For details, see Job Parameter Description. |
| from-config-values | ConfigValues object | Source link parameters, which vary depending on the source. For details, see Source Job Parameters. |
| to-connector-name | String | Destination link type. The options are the same as those of from-connector-name. |
| name | String | Job name, which contains 1 to 240 characters |
| from-link-name | String | Name of the source link, that is, the name of a link created through the link creation API |
| creation-user | String | User who created the job. The value is generated by the system. |
| creation-date | Long | Time when the job was created, accurate to millisecond. The value is generated by the system. |
| update-date | Long | Time when the job was last updated, accurate to millisecond. The value is generated by the system. |
| is_incre_job | Boolean | Whether the job is an incremental job. This parameter is deprecated. |
| flag | Integer | Whether the job is a scheduled job (1: yes; 0: no). The value is generated by the system based on the scheduled task configuration. |
| files_read | Integer | Number of files read. The value is generated by the system. |
| update-user | String | User who last updated the job. The value is generated by the system. |
| external_id | String | ID of the job to be executed. For a local job, the value is in the format of job_local1202051771_0002. For a DLI job, the value is the DLI job ID, for example, "12345". The value is generated by the system and does not need to be set. |
| type | String | Job type, same as job_type: NORMAL_JOB, BATCH_JOB, or SCENARIO_JOB |
| execute_start_date | Long | Time when the last task was started, accurate to millisecond. The value is generated by the system. |
| delete_rows | Integer | Number of rows deleted by an incremental job. This parameter is deprecated. |
| enabled | Boolean | Whether the link is enabled. The value is generated by the system. |
| bytes_written | Long | Number of bytes written by the job. The value is generated by the system. |
| id | Integer | Job ID, which is generated by the system |
| is_use_sql | Boolean | Whether SQL statements are used for source data extraction. The value is generated by the system and does not need to be set. |
| update_rows | Integer | Number of rows updated by an incremental job. This parameter is deprecated. |
| group_name | String | Group name |
| bytes_read | Long | Number of bytes read by the job. The value is generated by the system. |
| execute_update_date | Long | Time when the last task was updated, accurate to millisecond. The value is generated by the system. |
| write_rows | Integer | Number of rows written by an incremental job. This parameter is deprecated. |
| rows_written | Integer | Number of rows written by the job. The value is generated by the system. |
| rows_read | Long | Number of rows read by the job. The value is generated by the system. |
| files_written | Integer | Number of files written. The value is generated by the system. |
| is_incrementing | Boolean | Whether the job is an incremental job. Like is_incre_job, this parameter is deprecated. |
| execute_create_date | Long | Time when the last task was created, accurate to millisecond. The value is generated by the system. |
| status | String | Job execution status: BOOTING (the job is starting), RUNNING (the job is running), SUCCEEDED (the job was executed successfully), FAILED (the job execution failed), NEW (the job has not been executed) |

Table 6 ConfigValues

| Parameter | Type | Description |
| --- | --- | --- |
| configs | Array of configs objects | The data structures of source link parameters, destination link parameters, and job parameters are the same; only the inputs parameter differs. For details, see the descriptions of configs parameters. |
| extended-configs | extended-configs object | Extended configuration. For details, see the descriptions of extended-configs parameters. The extended configuration is not open to external systems and does not need to be set. |

Table 7 configs

| Parameter | Type | Description |
| --- | --- | --- |
| inputs | Array of Input objects | Input parameter list. Each element in the list is in name,value format. For details, see the descriptions of inputs parameters. In from-config-values, the value varies with the source link type (see "Source Job Parameters" in the Cloud Data Migration User Guide); in to-config-values, it varies with the destination link type (see "Destination Job Parameters" in the same guide). For the inputs parameter in driver-config-values, see the job parameter descriptions. |
| name | String | Configuration name. The value is fromJobConfig for a source job, toJobConfig for a destination job, and linkConfig for a link. |
| id | Integer | Configuration ID, which is generated by the system. You do not need to set this parameter. |
| type | String | Configuration type, which is generated by the system. The value can be LINK (for link management APIs) or JOB (for job management APIs). You do not need to set this parameter. |

Table 8 Input

| Parameter | Type | Description |
| --- | --- | --- |
| name | String | Parameter name. For link management APIs, names start with linkConfig. and vary with the link type (see the parameter descriptions of the corresponding link in Link Parameters). For job management APIs, source parameter names start with fromJobConfig. (see Source Job Parameters) and destination parameter names start with toJobConfig. (see Destination Job Parameters); for job parameters, see the task parameter descriptions in Job Parameters. |
| value | Object | Parameter value, which must be a string. |
| type | String | Value type, such as STRING or INTEGER. The value is set by the system. |

Table 9 extended-configs

| Parameter | Type | Description |
| --- | --- | --- |
| name | String | Extended configuration name. This parameter is not open to external systems and does not need to be set. |
| value | String | Extended configuration value. This parameter is not open to external systems and does not need to be set. |
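The nested ConfigValues > configs > inputs layout described in Tables 6 to 8 can be read with a small helper. A sketch (the function name is illustrative, not part of the API):

```python
def get_input(config_values, config_name, input_name):
    """Return the value of one input inside a ConfigValues dict, or None.

    config_values follows the Table 6 layout:
    {"configs": [{"name": ..., "inputs": [{"name": ..., "value": ...}]}]}
    """
    for config in config_values.get("configs", []):
        if config.get("name") == config_name:
            for item in config.get("inputs", []):
                if item.get("name") == input_name:
                    return item.get("value")
    return None
```

Applied to a job's from-config-values, for example, get_input(job["from-config-values"], "fromJobConfig", "fromJobConfig.index") returns the configured index name.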

Example Requests

GET /v1.1/1551c7f6c808414d8e9f3c514a170f2e/clusters/6ec9a0a4-76be-4262-8697-e7af1fac7920/cdm/job/all?jobType=NORMAL_JOB
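Sent over raw REST, this request needs only the X-Auth-Token header from Table 3. A sketch using the Python standard library (the endpoint host is a placeholder supplied by the caller, not a documented value):

```python
import json
import urllib.request

def build_request(endpoint, project_id, cluster_id, token, job_type="NORMAL_JOB"):
    """Build the authenticated GET request for querying all jobs of one type."""
    url = (f"{endpoint}/v1.1/{project_id}/clusters/{cluster_id}"
           f"/cdm/job/all?jobType={job_type}")
    return urllib.request.Request(url, headers={"X-Auth-Token": token})

def query_jobs(endpoint, project_id, cluster_id, token):
    """Send the request and parse the JSON body described in Table 4."""
    req = build_request(endpoint, project_id, cluster_id, token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```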

Example Responses

Status code: 200

Request succeeded.

{
  "total" : 1,
  "jobs" : [ {
    "job_type" : "NORMAL_JOB",
    "from-connector-name" : "elasticsearch-connector",
    "to-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "toJobConfig.streamName",
          "value" : "dis-lkGm"
        }, {
          "name" : "toJobConfig.separator",
          "value" : "|"
        }, {
          "name" : "toJobConfig.columnList",
          "value" : "1&2&3"
        } ],
        "name" : "toJobConfig"
      } ]
    },
    "to-link-name" : "dis",
    "driver-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "throttlingConfig.numExtractors",
          "value" : "1"
        }, {
          "name" : "throttlingConfig.submitToCluster",
          "value" : "false"
        }, {
          "name" : "throttlingConfig.numLoaders",
          "value" : "1"
        }, {
          "name" : "throttlingConfig.recordDirtyData",
          "value" : "false"
        } ],
        "name" : "throttlingConfig"
      }, {
        "inputs" : [ ],
        "name" : "jarConfig"
      }, {
        "inputs" : [ {
          "name" : "schedulerConfig.isSchedulerJob",
          "value" : "false"
        }, {
          "name" : "schedulerConfig.disposableType",
          "value" : "NONE"
        } ],
        "name" : "schedulerConfig"
      }, {
        "inputs" : [ ],
        "name" : "transformConfig"
      }, {
        "inputs" : [ {
          "name" : "retryJobConfig.retryJobType",
          "value" : "NONE"
        } ],
        "name" : "retryJobConfig"
      } ]
    },
    "from-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "fromJobConfig.index",
          "value" : "52est"
        }, {
          "name" : "fromJobConfig.type",
          "value" : "est_array"
        }, {
          "name" : "fromJobConfig.columnList",
          "value" : "array_f1_int:long&array_f2_text:string&array_f3_object:nested"
        }, {
          "name" : "fromJobConfig.splitNestedField",
          "value" : "false"
        } ],
        "name" : "fromJobConfig"
      } ]
    },
    "to-connector-name" : "dis-connector",
    "name" : "es_css",
    "from-link-name" : "css"
  } ],
  "page_no" : 1,
  "page_size" : 10
}
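A response like the one above can be reduced to a per-job summary by walking the jobs array. A small sketch:

```python
def summarize_jobs(response):
    """Map each job in a query response to (name, source link, destination link)."""
    return [(job.get("name"),
             job.get("from-link-name"),
             job.get("to-link-name"))
            for job in response.get("jobs", [])]
```

Applied to the example response, this yields [("es_css", "css", "dis")].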

SDK Sample Code

The SDK sample code (Java, Python, and Go) is as follows.

package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.cdm.v1.region.CdmRegion;
import com.huaweicloud.sdk.cdm.v1.*;
import com.huaweicloud.sdk.cdm.v1.model.*;


public class ShowJobsSolution {

    public static void main(String[] args) {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        CdmClient client = CdmClient.newBuilder()
                .withCredential(auth)
                .withRegion(CdmRegion.valueOf("<YOUR REGION>"))
                .build();
        ShowJobsRequest request = new ShowJobsRequest();
        request.withClusterId("{cluster_id}");
        request.withJobName("{job_name}");
        try {
            ShowJobsResponse response = client.showJobs(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
# coding: utf-8

import os
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkcdm.v1.region.cdm_region import CdmRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkcdm.v1 import *

if __name__ == "__main__":
    # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
    # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
    ak = os.environ["CLOUD_SDK_AK"]
    sk = os.environ["CLOUD_SDK_SK"]
    projectId = "{project_id}"

    credentials = BasicCredentials(ak, sk, projectId)

    client = CdmClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(CdmRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = ShowJobsRequest()
        request.cluster_id = "{cluster_id}"
        request.job_name = "{job_name}"
        response = client.show_jobs(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	cdm "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/region"
)

func main() {
    // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
    // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
    ak := os.Getenv("CLOUD_SDK_AK")
    sk := os.Getenv("CLOUD_SDK_SK")
    projectId := "{project_id}"

    auth := basic.NewCredentialsBuilder().
        WithAk(ak).
        WithSk(sk).
        WithProjectId(projectId).
        Build()

    client := cdm.NewCdmClient(
        cdm.CdmClientBuilder().
            WithRegion(region.ValueOf("<YOUR REGION>")).
            WithCredential(auth).
            Build())

    request := &model.ShowJobsRequest{}
    request.ClusterId = "{cluster_id}"
    request.JobName = "{job_name}"
    response, err := client.ShowJobs(request)
    if err == nil {
        fmt.Printf("%+v\n", response)
    } else {
        fmt.Println(err)
    }
}

For SDK sample code of more programming languages, see the Sample Code tab in API Explorer. SDK sample code can be automatically generated.

Status Codes

| Status Code | Description |
| --- | --- |
| 200 | Request succeeded. |

Error Codes

See Error Codes.