Querying a Job
Function
This API is used to query jobs.
Calling Method
For details, see Calling APIs.
URI
GET /v1.1/{project_id}/clusters/{cluster_id}/cdm/job/{job_name}
Path Parameters

Parameter | Mandatory | Type | Description
---|---|---|---
project_id | Yes | String | Project ID. For details about how to obtain the project ID, see Project ID and Account ID.
cluster_id | Yes | String | Cluster ID
job_name | Yes | String | Job name. If this parameter is set to all, all jobs are queried.
Query Parameters

Parameter | Mandatory | Type | Description
---|---|---|---
filter | No | String | When job_name is all, this parameter is used for fuzzy job filtering.
page_no | No | Integer | Page number. Minimum: 1
page_size | No | Integer | Number of jobs on each page. The value ranges from 10 to 100. Minimum: 10, Maximum: 100
jobType | No | String | Type of the jobs to be queried, for example, NORMAL_JOB
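
Putting the path and query parameters together, the request URL can be assembled as in the following sketch. The endpoint host and the helper name are illustrative assumptions, not part of this API:

```python
from urllib.parse import urlencode

def build_query_job_url(endpoint, project_id, cluster_id, job_name="all",
                        job_filter=None, page_no=None, page_size=None,
                        job_type=None):
    """Assemble the GET URL for querying jobs. `endpoint` is assumed to be
    the CDM service endpoint of your region (hypothetical here)."""
    url = f"{endpoint}/v1.1/{project_id}/clusters/{cluster_id}/cdm/job/{job_name}"
    params = {}
    if job_filter is not None:
        params["filter"] = job_filter    # fuzzy filtering, effective only when job_name is "all"
    if page_no is not None:
        params["page_no"] = page_no      # minimum: 1
    if page_size is not None:
        params["page_size"] = page_size  # 10 to 100
    if job_type is not None:
        params["jobType"] = job_type     # for example, NORMAL_JOB
    return url + ("?" + urlencode(params) if params else "")

print(build_query_job_url("https://cdm.example.com", "{project_id}",
                          "{cluster_id}", job_type="NORMAL_JOB"))
```

With only defaults, the helper reproduces the `/cdm/job/all` form used in the example request below.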
Request Parameters
Parameter | Mandatory | Type | Description
---|---|---|---
X-Auth-Token | Yes | String | User token. It can be obtained by calling the IAM API (value of X-Subject-Token in the response header).
Response Parameters
Status code: 200
Parameter | Type | Description
---|---|---
total | Integer | Number of jobs
jobs | Array of Job objects | Job list. For details, see the descriptions of jobs parameters.
page_no | Integer | Page number. Jobs on the specified page are returned.
page_size | Integer | Number of jobs on each page
Parameter | Type | Description
---|---|---
job_type | String | Job type
from-connector-name | String | Source link type. The available values are as follows: generic-jdbc-connector: link to a relational database; obs-connector: link to OBS; hdfs-connector: link to HDFS; hbase-connector: link to HBase or CloudTable; hive-connector: link to Hive; ftp-connector/sftp-connector: link to an FTP or SFTP server; mongodb-connector: link to MongoDB; redis-connector: link to Redis or DCS; kafka-connector: link to Kafka; dis-connector: link to DIS; elasticsearch-connector: link to Elasticsearch or Cloud Search Service (CSS); dli-connector: link to DLI; http-connector: link to an HTTP or HTTPS server (no link parameters are required); dms-kafka-connector: link to DMS for Kafka
to-config-values | ConfigValues object | Destination link parameters, which vary depending on the destination. For details, see Destination Job Parameters.
to-link-name | String | Name of the destination link, that is, the name of a link created through the link creation API
driver-config-values | ConfigValues object | Job parameters, such as Retry upon Failure and Concurrent Extractors. For details, see Job Parameter Description.
from-config-values | ConfigValues object | Source link parameters, which vary depending on the source. For details, see Source Job Parameters.
to-connector-name | String | Destination link type. The available values are as follows: generic-jdbc-connector: link to a relational database; obs-connector: link to OBS; hdfs-connector: link to HDFS; hbase-connector: link to HBase or CloudTable; hive-connector: link to Hive; ftp-connector/sftp-connector: link to an FTP or SFTP server; mongodb-connector: link to MongoDB; redis-connector: link to Redis or DCS; kafka-connector: link to Kafka; dis-connector: link to DIS; elasticsearch-connector: link to Elasticsearch or Cloud Search Service (CSS); dli-connector: link to DLI; http-connector: link to an HTTP or HTTPS server (no link parameters are required); dms-kafka-connector: link to DMS for Kafka
name | String | Job name, which contains 1 to 240 characters. Minimum: 1, Maximum: 240
from-link-name | String | Name of the source link, that is, the name of a link created through the link creation API
creation-user | String | User who created the job. The value is generated by the system.
creation-date | Long | Time when the job was created, accurate to the millisecond. The value is generated by the system.
update-date | Long | Time when the job was last updated, accurate to the millisecond. The value is generated by the system.
is_incre_job | Boolean | Whether the job is an incremental job. This parameter is deprecated.
flag | Integer | Whether the job is a scheduled job. The value is 1 for a scheduled job and 0 otherwise. The value is generated by the system based on the scheduled task configuration.
files_read | Integer | Number of files read by the job. The value is generated by the system.
update-user | String | User who last updated the job. The value is generated by the system.
external_id | String | ID of the job to be executed. For a local job, the value is in the format job_local1202051771_0002. For a DLI job, the value is the DLI job ID, for example, "12345". The value is generated by the system and does not need to be set.
type | String | Job type. The value of this parameter is the same as that of job_type.
execute_start_date | Long | Time when the last task was started, accurate to the millisecond. The value is generated by the system.
delete_rows | Integer | Number of rows deleted by an incremental job. This parameter is deprecated.
enabled | Boolean | Whether the link is enabled. The value is generated by the system.
bytes_written | Long | Number of bytes written by the job. The value is generated by the system.
id | Integer | Job ID, which is generated by the system
is_use_sql | Boolean | Whether SQL statements are used. The value is generated by the system based on whether SQL statements are used at the source.
update_rows | Integer | Number of rows updated by an incremental job. This parameter is deprecated.
group_name | String | Group name
bytes_read | Long | Number of bytes read by the job. The value is generated by the system.
execute_update_date | Long | Time when the last task was updated, accurate to the millisecond. The value is generated by the system.
write_rows | Integer | Number of rows written by an incremental job. This parameter is deprecated.
rows_written | Integer | Number of rows written by the job. The value is generated by the system.
rows_read | Long | Number of rows read by the job. The value is generated by the system.
files_written | Integer | Number of files written by the job. The value is generated by the system.
is_incrementing | Boolean | Whether the job is an incremental job. Like is_incre_job, this parameter is deprecated.
execute_create_date | Long | Time when the last task was created, accurate to the millisecond. The value is generated by the system.
status | String | Job execution status
Parameter | Type | Description
---|---|---
configs | Array of configs objects | The data structures of source link parameters, destination link parameters, and job parameters are the same; only the inputs parameter differs. For details, see the descriptions of configs parameters.
extended-configs | extended-configs object | Extended configuration. For details, see the descriptions of extended-configs parameters. The extended configuration is not open to external systems. You do not need to set it.
Parameter | Type | Description
---|---|---
inputs | Array of Input objects | Input parameter list. Each element in the list is in name,value format. For details, see the descriptions of inputs parameters. In the from-config-values data structure, the value of this parameter varies with the source link type (see "Source Job Parameters" in the Cloud Data Migration User Guide). In the to-config-values data structure, the value varies with the destination link type (see "Destination Job Parameters" in the same guide). For details about the inputs parameter in the driver-config-values data structure, see the job parameter descriptions.
name | String | Configuration name. The value is fromJobConfig for a source job, toJobConfig for a destination job, and linkConfig for a link.
id | Integer | Configuration ID, which is generated by the system. You do not need to set this parameter.
type | String | Configuration type, which is generated by the system. The value can be LINK (for link management APIs) or JOB (for job management APIs). You do not need to set this parameter.

Parameter | Type | Description
---|---|---
name | String | Parameter name
value | String | Parameter value, which must be a string
type | String | Value type, such as STRING or INTEGER. The value is set by the system.
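
Taken together, the ConfigValues, configs, and inputs tables above describe one nested structure. The following minimal Python sketch builds such an object using values taken from the example response on this page (the variable name is mine):

```python
# One ConfigValues object: a list of configs, each with a fixed name
# and a list of name/value Input pairs.
to_config_values = {
    "configs": [
        {
            "name": "toJobConfig",  # fixed name for destination job configuration
            "inputs": [
                {"name": "toJobConfig.streamName", "value": "dis-lkGm"},
                {"name": "toJobConfig.separator", "value": "|"},
            ],
        }
    ]
}

# Per the Input table, every value must be a string, even for numeric settings.
assert all(isinstance(i["value"], str)
           for c in to_config_values["configs"] for i in c["inputs"])
```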
Example Requests
GET /v1.1/1551c7f6c808414d8e9f3c514a170f2e/clusters/6ec9a0a4-76be-4262-8697-e7af1fac7920/cdm/job/all?jobType=NORMAL_JOB
Example Responses
Status code: 200
ok
```json
{
  "total" : 1,
  "jobs" : [ {
    "job_type" : "NORMAL_JOB",
    "from-connector-name" : "elasticsearch-connector",
    "to-config-values" : {
      "configs" : [ {
        "inputs" : [
          { "name" : "toJobConfig.streamName", "value" : "dis-lkGm" },
          { "name" : "toJobConfig.separator", "value" : "|" },
          { "name" : "toJobConfig.columnList", "value" : "1&2&3" }
        ],
        "name" : "toJobConfig"
      } ]
    },
    "to-link-name" : "dis",
    "driver-config-values" : {
      "configs" : [ {
        "inputs" : [
          { "name" : "throttlingConfig.numExtractors", "value" : "1" },
          { "name" : "throttlingConfig.submitToCluster", "value" : "false" },
          { "name" : "throttlingConfig.numLoaders", "value" : "1" },
          { "name" : "throttlingConfig.recordDirtyData", "value" : "false" }
        ],
        "name" : "throttlingConfig"
      }, {
        "inputs" : [ ],
        "name" : "jarConfig"
      }, {
        "inputs" : [
          { "name" : "schedulerConfig.isSchedulerJob", "value" : "false" },
          { "name" : "schedulerConfig.disposableType", "value" : "NONE" }
        ],
        "name" : "schedulerConfig"
      }, {
        "inputs" : [ ],
        "name" : "transformConfig"
      }, {
        "inputs" : [
          { "name" : "retryJobConfig.retryJobType", "value" : "NONE" }
        ],
        "name" : "retryJobConfig"
      } ]
    },
    "from-config-values" : {
      "configs" : [ {
        "inputs" : [
          { "name" : "fromJobConfig.index", "value" : "52est" },
          { "name" : "fromJobConfig.type", "value" : "est_array" },
          { "name" : "fromJobConfig.columnList", "value" : "array_f1_int:long&array_f2_text:string&array_f3_object:nested" },
          { "name" : "fromJobConfig.splitNestedField", "value" : "false" }
        ],
        "name" : "fromJobConfig"
      } ]
    },
    "to-connector-name" : "dis-connector",
    "name" : "es_css",
    "from-link-name" : "css"
  } ],
  "page_no" : 1,
  "page_size" : 10
}
```
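
To work with such a response programmatically, the nested configs/inputs lists can be flattened into plain dictionaries. A sketch of one way to do this; the helper name `job_inputs` is my own, not part of any SDK:

```python
import json

def job_inputs(job, section):
    """Flatten the name/value inputs of one config-values section
    ("from-config-values", "to-config-values", or "driver-config-values")
    into a single dict."""
    flat = {}
    for cfg in job.get(section, {}).get("configs", []):
        for item in cfg.get("inputs", []):
            flat[item["name"]] = item["value"]
    return flat

# A trimmed version of the example response above.
response = json.loads("""
{ "total": 1,
  "jobs": [ {
    "name": "es_css",
    "from-config-values": { "configs": [ {
      "name": "fromJobConfig",
      "inputs": [ { "name": "fromJobConfig.index", "value": "52est" },
                  { "name": "fromJobConfig.type", "value": "est_array" } ]
    } ] }
  } ] }
""")

for job in response["jobs"]:
    print(job["name"], job_inputs(job, "from-config-values"))
```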
SDK Sample Code
The SDK sample code is as follows.
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.cdm.v1.region.CdmRegion;
import com.huaweicloud.sdk.cdm.v1.*;
import com.huaweicloud.sdk.cdm.v1.model.*;

public class ShowJobsSolution {

    public static void main(String[] args) {
        // Hard-coding the AK and SK or storing them in plaintext poses a security risk.
        // In this example, they are read from the environment variables CLOUD_SDK_AK and
        // CLOUD_SDK_SK, which must be set before running.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        CdmClient client = CdmClient.newBuilder()
                .withCredential(auth)
                .withRegion(CdmRegion.valueOf("<YOUR REGION>"))
                .build();

        ShowJobsRequest request = new ShowJobsRequest();
        request.withClusterId("{cluster_id}");
        request.withJobName("{job_name}");
        try {
            ShowJobsResponse response = client.showJobs(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
```python
# coding: utf-8

import os

from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkcdm.v1.region.cdm_region import CdmRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkcdm.v1 import *

if __name__ == "__main__":
    # Hard-coding the AK and SK or storing them in plaintext poses a security risk.
    # In this example, they are read from the environment variables CLOUD_SDK_AK and
    # CLOUD_SDK_SK, which must be set before running.
    ak = os.getenv("CLOUD_SDK_AK")
    sk = os.getenv("CLOUD_SDK_SK")
    project_id = "{project_id}"

    credentials = BasicCredentials(ak, sk, project_id)

    client = CdmClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(CdmRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = ShowJobsRequest()
        request.cluster_id = "{cluster_id}"
        request.job_name = "{job_name}"
        response = client.show_jobs(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
```go
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	cdm "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/region"
)

func main() {
	// Hard-coding the AK and SK or storing them in plaintext poses a security risk.
	// In this example, they are read from the environment variables CLOUD_SDK_AK and
	// CLOUD_SDK_SK, which must be set before running.
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	client := cdm.NewCdmClient(
		cdm.CdmClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	request := &model.ShowJobsRequest{}
	request.ClusterId = "{cluster_id}"
	request.JobName = "{job_name}"
	response, err := client.ShowJobs(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code in more programming languages, see the Sample Code tab in API Explorer, which can generate SDK sample code automatically.
Status Codes
Status Code | Description
---|---
200 | ok
Error Codes
See Error Codes.