Creating a Job in a Specified Cluster
Function
This API is used to create a job in a specified cluster.
Calling Method
For details, see Calling APIs.
URI
POST /v1.1/{project_id}/clusters/{cluster_id}/cdm/job
The following table describes the path parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details about how to obtain the project ID, see Project ID and Account ID. |
| cluster_id | Yes | String | Cluster ID |
Request Parameters
The following table describes the request header parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| X-Auth-Token | Yes | String | User token. It can be obtained by calling the IAM API (the value of X-Subject-Token in the response header). |
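If you call the API directly over REST rather than through an SDK, the token is passed in the X-Auth-Token request header. The following is a minimal sketch using the third-party requests library, not an official sample: the endpoint host is a placeholder you must replace with the CDM endpoint of your region, and the body must be built as described in the request body tables below.

```python
# Minimal sketch of a direct REST call (illustrative only).
# Assumptions: a valid IAM token has already been obtained, and the endpoint
# host below is replaced with the CDM endpoint of your region.
import requests

project_id = "{project_id}"
cluster_id = "{cluster_id}"
token = "<value of X-Subject-Token from the IAM response header>"

url = (f"https://cdm.<region>.myhuaweicloud.com"
       f"/v1.1/{project_id}/clusters/{cluster_id}/cdm/job")

body = {"jobs": []}  # populate per the request body tables below

response = requests.post(
    url,
    headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    json=body,
)
print(response.status_code, response.json())
```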
The following table describes the request body parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| jobs | Yes | Array of Job objects | Job list. For details, see the descriptions of jobs parameters below. |
The following table describes the jobs parameters (Job object).

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| job_type | Yes | String | Job type. Enumeration values: NORMAL_JOB (table/file migration), BATCH_JOB (entire database migration), SCHEDULER_JOB (scheduled job) |
| from-connector-name | Yes | String | Source link type. The available values are as follows: generic-jdbc-connector (link to a relational database), obs-connector (link to OBS), hdfs-connector (link to HDFS), hbase-connector (link to HBase or CloudTable), hive-connector (link to Hive), ftp-connector/sftp-connector (link to an FTP or SFTP server), mongodb-connector (link to MongoDB), redis-connector (link to Redis or DCS), kafka-connector (link to Kafka), dis-connector (link to DIS), elasticsearch-connector (link to Elasticsearch or Cloud Search Service (CSS)), dli-connector (link to DLI), http-connector (link to an HTTP or HTTPS server; no link parameters are required), dms-kafka-connector (link to DMS for Kafka) |
| to-config-values | Yes | ConfigValues object | Destination link parameters, which vary depending on the destination. For details, see Destination Job Parameters. |
| to-link-name | Yes | String | Name of the destination link, that is, the name of a link created through the link creation API |
| driver-config-values | Yes | ConfigValues object | Job parameters, such as Retry upon Failure and Concurrent Extractors. For details, see Job Parameter Description. |
| from-config-values | Yes | ConfigValues object | Source link parameters, which vary depending on the source. For details, see Source Job Parameters. |
| to-connector-name | Yes | String | Destination link type. The available values are the same as those of from-connector-name. |
| name | Yes | String | Job name, which contains 1 to 240 characters. Minimum: 1. Maximum: 240 |
| from-link-name | Yes | String | Name of the source link, that is, the name of a link created through the link creation API |
| creation-user | No | String | User who created the job. The value is generated by the system. |
| creation-date | No | Long | Time when the job was created, accurate to the millisecond. The value is generated by the system. |
| update-date | No | Long | Time when the job was last updated, accurate to the millisecond. The value is generated by the system. |
| is_incre_job | No | Boolean | Whether the job is an incremental job. This parameter is deprecated. |
| flag | No | Integer | Whether the job is a scheduled job. If yes, the value is 1; otherwise, the value is 0. The value is generated by the system based on the scheduled task configuration. |
| files_read | No | Integer | Number of files read. The value is generated by the system. |
| update-user | No | String | User who last updated the job. The value is generated by the system. |
| external_id | No | String | ID of the job to be executed. For a local job, the value is in the format of job_local1202051771_0002. For a DLI job, the value is the DLI job ID, for example, "12345". The value is generated by the system and does not need to be set. |
| type | No | String | Job type. The value of this parameter is the same as that of job_type. |
| execute_start_date | No | Long | Time when the last task was started, accurate to the millisecond. The value is generated by the system. |
| delete_rows | No | Integer | Number of rows deleted by an incremental job. This parameter is deprecated. |
| enabled | No | Boolean | Whether the link is enabled. The value is generated by the system. |
| bytes_written | No | Long | Number of bytes written by the job. The value is generated by the system. |
| id | No | Integer | Job ID, which is generated by the system |
| is_use_sql | No | Boolean | Whether SQL statements are used. The value is generated by the system based on whether SQL statements are used at the source. |
| update_rows | No | Integer | Number of rows updated by an incremental job. This parameter is deprecated. |
| group_name | No | String | Group name |
| bytes_read | No | Long | Number of bytes read by the job. The value is generated by the system. |
| execute_update_date | No | Long | Time when the last task was updated, accurate to the millisecond. The value is generated by the system. |
| write_rows | No | Integer | Number of rows written by an incremental job. This parameter is deprecated. |
| rows_written | No | Integer | Number of rows written by the job. The value is generated by the system. |
| rows_read | No | Long | Number of rows read by the job. The value is generated by the system. |
| files_written | No | Integer | Number of files written. The value is generated by the system. |
| is_incrementing | No | Boolean | Whether the job is an incremental job. Like is_incre_job, this parameter is deprecated. |
| execute_create_date | No | Long | Time when the last task was created, accurate to the millisecond. The value is generated by the system. |
| status | No | String | Job execution status |
The following table describes the ConfigValues parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| configs | Yes | Array of configs objects | The data structures of source link parameters, destination link parameters, and job parameters are the same; however, the inputs parameter varies. For details, see the descriptions of configs parameters below. |
| extended-configs | No | extended-configs object | Extended configuration. For details, see the descriptions of extended-configs parameters below. The extended configuration is not open to external systems, so you do not need to set it. |
The following table describes the configs parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| inputs | Yes | Array of Input objects | Input parameter list. Each element in the list is in name,value format. For details, see the descriptions of inputs parameters below. In the from-config-values data structure, the value of this parameter varies with the source link type; see "Source Job Parameters" in the Cloud Data Migration User Guide. In the to-config-values data structure, the value varies with the destination link type; see "Destination Job Parameters" in the Cloud Data Migration User Guide. For the inputs parameter in the driver-config-values data structure, see the job parameter descriptions. |
| name | Yes | String | Configuration name. The value is fromJobConfig for a source job, toJobConfig for a destination job, and linkConfig for a link. |
| id | No | Integer | Configuration ID, which is generated by the system. You do not need to set this parameter. |
| type | No | String | Configuration type, which is generated by the system. You do not need to set this parameter. The value can be LINK (for link management APIs) or JOB (for job management APIs). |
The following table describes the inputs parameters (Input object).

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| name | Yes | String | Parameter name |
| value | Yes | String | Parameter value, which must be a string |
| type | No | String | Value type, for example, STRING or INTEGER. The value is set by the system. |
The following table describes the extended-configs parameters.

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| name | No | String | Extended configuration name. This parameter is not open to external systems and does not need to be set. |
| value | No | String | Extended configuration value. This parameter is not open to external systems and does not need to be set. |
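Putting the tables above together: a request body is a jobs array in which each job carries three ConfigValues objects (from-config-values, to-config-values, and driver-config-values), each holding configs whose inputs are name/value pairs. The following skeleton illustrates that nesting with placeholder values; it is not a runnable job definition.

```json
{
  "jobs" : [ {
    "job_type" : "NORMAL_JOB",
    "name" : "<job name>",
    "from-connector-name" : "<source connector>",
    "from-link-name" : "<source link>",
    "from-config-values" : {
      "configs" : [ {
        "name" : "fromJobConfig",
        "inputs" : [ { "name" : "fromJobConfig.<parameter>", "value" : "<value>" } ]
      } ]
    },
    "to-connector-name" : "<destination connector>",
    "to-link-name" : "<destination link>",
    "to-config-values" : {
      "configs" : [ {
        "name" : "toJobConfig",
        "inputs" : [ { "name" : "toJobConfig.<parameter>", "value" : "<value>" } ]
      } ]
    },
    "driver-config-values" : {
      "configs" : [ {
        "name" : "throttlingConfig",
        "inputs" : [ { "name" : "throttlingConfig.<parameter>", "value" : "<value>" } ]
      } ]
    }
  } ]
}
```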
Response Parameters
Status code: 200
The following table describes the response body parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| name | String | Job name |
| validation-result | Array of JobValidationResult objects | Check result. For details, see the descriptions of JobValidationResult parameters below. |
The following table describes the JobValidationResult parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| message | String | Error message |
| status | String | Check result status. Enumeration values: ERROR, WARNING |
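For example, a 200 response that reports validation problems would carry them in validation-result. The following shape is illustrative only; the message text is hypothetical.

```json
{
  "name" : "es_css",
  "validation-result" : [ {
    "message" : "<description of the validation problem>",
    "status" : "WARNING"
  } ]
}
```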
Status code: 400
| Parameter | Type | Description |
| --- | --- | --- |
| code | String | Return code |
| errCode | String | Error code |
| message | String | Error message |
| externalMessage | String | Additional information |
Example Requests
Creating a data migration job named es_css, with an Elasticsearch link as the source and a DIS link as the destination
```
POST /v1.1/1551c7f6c808414d8e9f3c514a170f2e/clusters/6ec9a0a4-76be-4262-8697-e7af1fac7920/cdm/job

{
  "jobs" : [ {
    "job_type" : "NORMAL_JOB",
    "from-connector-name" : "elasticsearch-connector",
    "to-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "toJobConfig.streamName",
          "value" : "dis-lkGm"
        }, {
          "name" : "toJobConfig.separator",
          "value" : "|"
        }, {
          "name" : "toJobConfig.columnList",
          "value" : "1&2&3"
        } ],
        "name" : "toJobConfig"
      } ]
    },
    "to-link-name" : "dis",
    "driver-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "throttlingConfig.numExtractors",
          "value" : "1"
        }, {
          "name" : "throttlingConfig.submitToCluster",
          "value" : "false"
        }, {
          "name" : "throttlingConfig.numLoaders",
          "value" : "1"
        }, {
          "name" : "throttlingConfig.recordDirtyData",
          "value" : "false"
        } ],
        "name" : "throttlingConfig"
      }, {
        "inputs" : [ ],
        "name" : "jarConfig"
      }, {
        "inputs" : [ {
          "name" : "schedulerConfig.isSchedulerJob",
          "value" : "false"
        }, {
          "name" : "schedulerConfig.disposableType",
          "value" : "NONE"
        } ],
        "name" : "schedulerConfig"
      }, {
        "inputs" : [ ],
        "name" : "transformConfig"
      }, {
        "inputs" : [ {
          "name" : "retryJobConfig.retryJobType",
          "value" : "NONE"
        } ],
        "name" : "retryJobConfig"
      } ]
    },
    "from-config-values" : {
      "configs" : [ {
        "inputs" : [ {
          "name" : "fromJobConfig.index",
          "value" : "52est"
        }, {
          "name" : "fromJobConfig.type",
          "value" : "est_array"
        }, {
          "name" : "fromJobConfig.columnList",
          "value" : "array_f1_int:long&array_f2_text:string&array_f3_object:nested"
        }, {
          "name" : "fromJobConfig.splitNestedField",
          "value" : "false"
        } ],
        "name" : "fromJobConfig"
      } ]
    },
    "to-connector-name" : "dis-connector",
    "name" : "es_css",
    "from-link-name" : "css"
  } ]
}
```
Example Responses
Status code: 200
ok
{ "name" : "mysql2hive" }
Status code: 400
Request error
{ "code" : "Cdm.0104", "errCode" : "Cdm.0104", "message" : "Job name already exist or created by other.", "ternalMessage" : "Job name already exist or created by other." }
SDK Sample Code
The SDK sample code is as follows.
Java: Creating a data migration job named es_css, with an Elasticsearch link as the source and a DIS link as the destination
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.cdm.v1.region.CdmRegion;
import com.huaweicloud.sdk.cdm.v1.*;
import com.huaweicloud.sdk.cdm.v1.model.*;

import java.util.List;
import java.util.ArrayList;

public class CreateJobSolution {

    public static void main(String[] args) {
        // Hard-coding the AK and SK or storing them in plaintext poses serious security
        // risks. Store them in ciphertext in configuration files or environment variables
        // and decrypt them when needed. In this example, the AK and SK are read from the
        // environment variables CLOUD_SDK_AK and CLOUD_SDK_SK; set them before running.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        CdmClient client = CdmClient.newBuilder()
                .withCredential(auth)
                .withRegion(CdmRegion.valueOf("<YOUR REGION>"))
                .build();
        CreateJobRequest request = new CreateJobRequest();
        request.withClusterId("{cluster_id}");
        CdmCreateJobJsonReq body = new CdmCreateJobJsonReq();

        // Source (Elasticsearch) job parameters
        List<Input> listConfigsInputs = new ArrayList<>();
        listConfigsInputs.add(new Input()
            .withName("fromJobConfig.index")
            .withValue("52est"));
        listConfigsInputs.add(new Input()
            .withName("fromJobConfig.type")
            .withValue("est_array"));
        listConfigsInputs.add(new Input()
            .withName("fromJobConfig.columnList")
            .withValue("array_f1_int:long&array_f2_text:string&array_f3_object:nested"));
        listConfigsInputs.add(new Input()
            .withName("fromJobConfig.splitNestedField")
            .withValue("false"));
        List<Configs> listFromConfigValuesConfigs = new ArrayList<>();
        listFromConfigValuesConfigs.add(new Configs()
            .withInputs(listConfigsInputs)
            .withName("fromJobConfig"));
        ConfigValues fromconfigvaluesJobs = new ConfigValues();
        fromconfigvaluesJobs.withConfigs(listFromConfigValuesConfigs);

        // Driver (job) parameters: retry, scheduling, and throttling configuration
        List<Input> listConfigsInputs1 = new ArrayList<>();
        listConfigsInputs1.add(new Input()
            .withName("retryJobConfig.retryJobType")
            .withValue("NONE"));
        List<Input> listConfigsInputs2 = new ArrayList<>();
        listConfigsInputs2.add(new Input()
            .withName("schedulerConfig.isSchedulerJob")
            .withValue("false"));
        listConfigsInputs2.add(new Input()
            .withName("schedulerConfig.disposableType")
            .withValue("NONE"));
        List<Input> listConfigsInputs3 = new ArrayList<>();
        listConfigsInputs3.add(new Input()
            .withName("throttlingConfig.numExtractors")
            .withValue("1"));
        listConfigsInputs3.add(new Input()
            .withName("throttlingConfig.submitToCluster")
            .withValue("false"));
        listConfigsInputs3.add(new Input()
            .withName("throttlingConfig.numLoaders")
            .withValue("1"));
        listConfigsInputs3.add(new Input()
            .withName("throttlingConfig.recordDirtyData")
            .withValue("false"));
        List<Configs> listDriverConfigValuesConfigs = new ArrayList<>();
        listDriverConfigValuesConfigs.add(new Configs()
            .withInputs(listConfigsInputs3)
            .withName("throttlingConfig"));
        listDriverConfigValuesConfigs.add(new Configs()
            .withInputs(listConfigsInputs2)
            .withName("schedulerConfig"));
        listDriverConfigValuesConfigs.add(new Configs()
            .withInputs(listConfigsInputs1)
            .withName("retryJobConfig"));
        ConfigValues driverconfigvaluesJobs = new ConfigValues();
        driverconfigvaluesJobs.withConfigs(listDriverConfigValuesConfigs);

        // Destination (DIS) job parameters
        List<Input> listConfigsInputs4 = new ArrayList<>();
        listConfigsInputs4.add(new Input()
            .withName("toJobConfig.streamName")
            .withValue("dis-lkGm"));
        listConfigsInputs4.add(new Input()
            .withName("toJobConfig.separator")
            .withValue("|"));
        listConfigsInputs4.add(new Input()
            .withName("toJobConfig.columnList")
            .withValue("1&2&3"));
        List<Configs> listToConfigValuesConfigs = new ArrayList<>();
        listToConfigValuesConfigs.add(new Configs()
            .withInputs(listConfigsInputs4)
            .withName("toJobConfig"));
        ConfigValues toconfigvaluesJobs = new ConfigValues();
        toconfigvaluesJobs.withConfigs(listToConfigValuesConfigs);

        // Assemble the job and send the request
        List<Job> listbodyJobs = new ArrayList<>();
        listbodyJobs.add(new Job()
            .withJobType(Job.JobTypeEnum.fromValue("NORMAL_JOB"))
            .withFromConnectorName("elasticsearch-connector")
            .withToConfigValues(toconfigvaluesJobs)
            .withToLinkName("dis")
            .withDriverConfigValues(driverconfigvaluesJobs)
            .withFromConfigValues(fromconfigvaluesJobs)
            .withToConnectorName("dis-connector")
            .withName("es_css")
            .withFromLinkName("css"));
        body.withJobs(listbodyJobs);
        request.withBody(body);
        try {
            CreateJobResponse response = client.createJob(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
Python: Creating a data migration job named es_css, with an Elasticsearch link as the source and a DIS link as the destination
```python
# coding: utf-8

import os

from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkcdm.v1.region.cdm_region import CdmRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkcdm.v1 import *

if __name__ == "__main__":
    # Hard-coding the AK and SK or storing them in plaintext poses serious security
    # risks. Store them in ciphertext in configuration files or environment variables
    # and decrypt them when needed. In this example, the AK and SK are read from the
    # environment variables CLOUD_SDK_AK and CLOUD_SDK_SK; set them before running.
    ak = os.getenv("CLOUD_SDK_AK")
    sk = os.getenv("CLOUD_SDK_SK")
    projectId = "{project_id}"

    credentials = BasicCredentials(ak, sk, projectId)

    client = CdmClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(CdmRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = CreateJobRequest()
        request.cluster_id = "{cluster_id}"
        # Source (Elasticsearch) job parameters
        listInputsConfigs = [
            Input(
                name="fromJobConfig.index",
                value="52est"
            ),
            Input(
                name="fromJobConfig.type",
                value="est_array"
            ),
            Input(
                name="fromJobConfig.columnList",
                value="array_f1_int:long&array_f2_text:string&array_f3_object:nested"
            ),
            Input(
                name="fromJobConfig.splitNestedField",
                value="false"
            )
        ]
        listConfigsFromconfigvalues = [
            Configs(
                inputs=listInputsConfigs,
                name="fromJobConfig"
            )
        ]
        fromconfigvaluesJobs = ConfigValues(
            configs=listConfigsFromconfigvalues
        )
        # Driver (job) parameters: retry, scheduling, and throttling configuration
        listInputsConfigs1 = [
            Input(
                name="retryJobConfig.retryJobType",
                value="NONE"
            )
        ]
        listInputsConfigs2 = [
            Input(
                name="schedulerConfig.isSchedulerJob",
                value="false"
            ),
            Input(
                name="schedulerConfig.disposableType",
                value="NONE"
            )
        ]
        listInputsConfigs3 = [
            Input(
                name="throttlingConfig.numExtractors",
                value="1"
            ),
            Input(
                name="throttlingConfig.submitToCluster",
                value="false"
            ),
            Input(
                name="throttlingConfig.numLoaders",
                value="1"
            ),
            Input(
                name="throttlingConfig.recordDirtyData",
                value="false"
            )
        ]
        listConfigsDriverconfigvalues = [
            Configs(
                inputs=listInputsConfigs3,
                name="throttlingConfig"
            ),
            Configs(
                inputs=listInputsConfigs2,
                name="schedulerConfig"
            ),
            Configs(
                inputs=listInputsConfigs1,
                name="retryJobConfig"
            )
        ]
        driverconfigvaluesJobs = ConfigValues(
            configs=listConfigsDriverconfigvalues
        )
        # Destination (DIS) job parameters
        listInputsConfigs4 = [
            Input(
                name="toJobConfig.streamName",
                value="dis-lkGm"
            ),
            Input(
                name="toJobConfig.separator",
                value="|"
            ),
            Input(
                name="toJobConfig.columnList",
                value="1&2&3"
            )
        ]
        listConfigsToconfigvalues = [
            Configs(
                inputs=listInputsConfigs4,
                name="toJobConfig"
            )
        ]
        toconfigvaluesJobs = ConfigValues(
            configs=listConfigsToconfigvalues
        )
        # Assemble the job and send the request
        listJobsbody = [
            Job(
                job_type="NORMAL_JOB",
                from_connector_name="elasticsearch-connector",
                to_config_values=toconfigvaluesJobs,
                to_link_name="dis",
                driver_config_values=driverconfigvaluesJobs,
                from_config_values=fromconfigvaluesJobs,
                to_connector_name="dis-connector",
                name="es_css",
                from_link_name="css"
            )
        ]
        request.body = CdmCreateJobJsonReq(
            jobs=listJobsbody
        )
        response = client.create_job(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
Go: Creating a data migration job named es_css, with an Elasticsearch link as the source and a DIS link as the destination
```go
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	cdm "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/region"
)

func main() {
	// Hard-coding the AK and SK or storing them in plaintext poses serious security
	// risks. Store them in ciphertext in configuration files or environment variables
	// and decrypt them when needed. In this example, the AK and SK are read from the
	// environment variables CLOUD_SDK_AK and CLOUD_SDK_SK; set them before running.
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	client := cdm.NewCdmClient(
		cdm.CdmClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	request := &model.CreateJobRequest{}
	request.ClusterId = "{cluster_id}"
	// Source (Elasticsearch) job parameters
	var listInputsConfigs = []model.Input{
		{
			Name:  "fromJobConfig.index",
			Value: "52est",
		},
		{
			Name:  "fromJobConfig.type",
			Value: "est_array",
		},
		{
			Name:  "fromJobConfig.columnList",
			Value: "array_f1_int:long&array_f2_text:string&array_f3_object:nested",
		},
		{
			Name:  "fromJobConfig.splitNestedField",
			Value: "false",
		},
	}
	var listConfigsFromConfigValues = []model.Configs{
		{
			Inputs: listInputsConfigs,
			Name:   "fromJobConfig",
		},
	}
	fromconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsFromConfigValues,
	}
	// Driver (job) parameters: retry, scheduling, and throttling configuration
	var listInputsConfigs1 = []model.Input{
		{
			Name:  "retryJobConfig.retryJobType",
			Value: "NONE",
		},
	}
	var listInputsConfigs2 = []model.Input{
		{
			Name:  "schedulerConfig.isSchedulerJob",
			Value: "false",
		},
		{
			Name:  "schedulerConfig.disposableType",
			Value: "NONE",
		},
	}
	var listInputsConfigs3 = []model.Input{
		{
			Name:  "throttlingConfig.numExtractors",
			Value: "1",
		},
		{
			Name:  "throttlingConfig.submitToCluster",
			Value: "false",
		},
		{
			Name:  "throttlingConfig.numLoaders",
			Value: "1",
		},
		{
			Name:  "throttlingConfig.recordDirtyData",
			Value: "false",
		},
	}
	var listConfigsDriverConfigValues = []model.Configs{
		{
			Inputs: listInputsConfigs3,
			Name:   "throttlingConfig",
		},
		{
			Inputs: listInputsConfigs2,
			Name:   "schedulerConfig",
		},
		{
			Inputs: listInputsConfigs1,
			Name:   "retryJobConfig",
		},
	}
	driverconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsDriverConfigValues,
	}
	// Destination (DIS) job parameters
	var listInputsConfigs4 = []model.Input{
		{
			Name:  "toJobConfig.streamName",
			Value: "dis-lkGm",
		},
		{
			Name:  "toJobConfig.separator",
			Value: "|",
		},
		{
			Name:  "toJobConfig.columnList",
			Value: "1&2&3",
		},
	}
	var listConfigsToConfigValues = []model.Configs{
		{
			Inputs: listInputsConfigs4,
			Name:   "toJobConfig",
		},
	}
	toconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsToConfigValues,
	}
	// Assemble the job and send the request
	var listJobsbody = []model.Job{
		{
			JobType:            model.GetJobJobTypeEnum().NORMAL_JOB,
			FromConnectorName:  "elasticsearch-connector",
			ToConfigValues:     toconfigvaluesJobs,
			ToLinkName:         "dis",
			DriverConfigValues: driverconfigvaluesJobs,
			FromConfigValues:   fromconfigvaluesJobs,
			ToConnectorName:    "dis-connector",
			Name:               "es_css",
			FromLinkName:       "css",
		},
	}
	request.Body = &model.CdmCreateJobJsonReq{
		Jobs: listJobsbody,
	}
	response, err := client.CreateJob(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code in more programming languages, see the Sample Code tab in API Explorer, which can automatically generate SDK sample code.
Status Codes
| Status Code | Description |
| --- | --- |
| 200 | ok |
| 400 | Request error |
Error Codes
See Error Codes.