Creating and Executing a Job in a Random Cluster
Function
This API is used to create and execute a job in a random cluster.
Calling Method
For details, see Calling APIs.
URI
POST /v1.1/{project_id}/clusters/job
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID. For details about how to obtain it, see Project ID and Account ID. |
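As an illustration, the full request URL is formed by substituting the project ID into the URI template. The endpoint host below is a placeholder, not a real CDM endpoint; use the endpoint for your region:

```python
# Assemble the full request URL for this API.
# The endpoint is a placeholder; replace it with the CDM endpoint for your region.
endpoint = "https://cdm.example-region.myhuaweicloud.com"
project_id = "1551c7f6c808414d8e9f3c514a170f2e"  # project ID from the example request

url = f"{endpoint}/v1.1/{project_id}/clusters/job"
print(url)
```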
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| X-Auth-Token | Yes | String | User token. It can be obtained by calling the IAM API (the value of X-Subject-Token in the response header). |
| X-Language | Yes | String | Language of the request. |
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| jobs | Yes | Array of Job objects | Job list. For details, see the descriptions of jobs parameters. |
| clusters | Yes | Array of strings | IDs of CDM clusters. The system randomly selects a cluster in the Running state from the specified clusters, then creates and executes a migration job in that cluster. |
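To make the body layout concrete, the following sketch builds the top-level request body as plain Python dicts rather than SDK models. The values come from the example request later on this page; the empty configs lists stand in for the type-specific parameters described in the tables below:

```python
# Minimal skeleton of the request body: a list of jobs plus candidate cluster IDs.
# The system picks one running cluster from "clusters" at random.
body = {
    "jobs": [
        {
            "job_type": "NORMAL_JOB",
            "name": "es_css",
            "from-connector-name": "elasticsearch-connector",
            "from-link-name": "css",
            "from-config-values": {"configs": []},    # filled in per source type
            "to-connector-name": "dis-connector",
            "to-link-name": "dis",
            "to-config-values": {"configs": []},      # filled in per destination type
            "driver-config-values": {"configs": []},  # job-level settings (retry, throttling, ...)
        }
    ],
    "clusters": [
        "b0791496-e111-4e75-b7ca-9277aeab9297",
        "c2db1191-eb6c-464a-a0d3-b434e6c6df26",
    ],
}
```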
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| job_type | Yes | String | Job type, for example, NORMAL_JOB. |
| from-connector-name | Yes | String | Source link type, for example, elasticsearch-connector. The source link parameters depend on this type. |
| to-config-values | Yes | ConfigValues object | Destination link parameters, which vary depending on the destination. For details, see Destination Job Parameters. |
| to-link-name | Yes | String | Name of the destination link, that is, the name of a link created through the link creation API. |
| driver-config-values | Yes | ConfigValues object | Job parameters, such as Retry upon Failure and Concurrent Extractors. For details, see Job Parameter Description. |
| from-config-values | Yes | ConfigValues object | Source link parameters, which vary depending on the source. For details, see Source Job Parameters. |
| to-connector-name | Yes | String | Destination link type, for example, dis-connector. The destination link parameters depend on this type. |
| name | Yes | String | Job name, which contains 1 to 240 characters. |
| from-link-name | Yes | String | Name of the source link, that is, the name of a link created through the link creation API. |
| creation-user | No | String | User who created the job. The value is generated by the system. |
| creation-date | No | Long | Time when the job was created, accurate to the millisecond. The value is generated by the system. |
| update-date | No | Long | Time when the job was last updated, accurate to the millisecond. The value is generated by the system. |
| is_incre_job | No | Boolean | Whether the job is an incremental job. This parameter is deprecated. |
| flag | No | Integer | Whether the job is a scheduled job (1 if yes; 0 otherwise). The value is generated by the system based on the scheduled task configuration. |
| files_read | No | Integer | Number of files read. The value is generated by the system. |
| update-user | No | String | User who last updated the job. The value is generated by the system. |
| external_id | No | String | ID of the job to be executed. For a local job, the value is in the format of job_local1202051771_0002. For a DLI job, the value is the DLI job ID, for example, "12345". The value is generated by the system and does not need to be set. |
| type | No | String | Job type. The value is the same as that of job_type. |
| execute_start_date | No | Long | Time when the last task was started, accurate to the millisecond. The value is generated by the system. |
| delete_rows | No | Integer | Number of rows deleted by an incremental job. This parameter is deprecated. |
| enabled | No | Boolean | Whether the link is enabled. The value is generated by the system. |
| bytes_written | No | Long | Number of bytes written by the job. The value is generated by the system. |
| id | No | Integer | Job ID, which is generated by the system. |
| is_use_sql | No | Boolean | Whether SQL statements are used for source data extraction. The value is generated by the system; you do not need to set it. |
| update_rows | No | Integer | Number of rows updated by an incremental job. This parameter is deprecated. |
| group_name | No | String | Group name. |
| bytes_read | No | Long | Number of bytes read by the job. The value is generated by the system. |
| execute_update_date | No | Long | Time when the last task was updated, accurate to the millisecond. The value is generated by the system. |
| write_rows | No | Integer | Number of rows written by an incremental job. This parameter is deprecated. |
| rows_written | No | Integer | Number of rows written by the job. The value is generated by the system. |
| rows_read | No | Long | Number of rows read by the job. The value is generated by the system. |
| files_written | No | Integer | Number of files written. The value is generated by the system. |
| is_incrementing | No | Boolean | Whether the job is an incremental job. Like is_incre_job, this parameter is deprecated. |
| execute_create_date | No | Long | Time when the last task was created, accurate to the millisecond. The value is generated by the system. |
| status | No | String | Job execution status. |
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| configs | Yes | Array of configs objects | The data structures of source link parameters, destination link parameters, and job parameters are the same; only the inputs parameter differs. For details, see the descriptions of configs parameters. |
| extended-configs | No | extended-configs object | Extended configuration. For details, see the descriptions of extended-configs parameters. The extended configuration is not open to external systems, so you do not need to set it. |
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| inputs | Yes | Array of Input objects | Input parameter list. Each element in the list is in name,value format. For details, see the descriptions of inputs parameters. In the from-config-values data structure, the value of this parameter varies with the source link type; see "Source Job Parameters" in the Cloud Data Migration User Guide. In the to-config-values data structure, the value varies with the destination link type; see "Destination Job Parameters" in the Cloud Data Migration User Guide. For the inputs parameter in the driver-config-values data structure, see the job parameter descriptions. |
| name | Yes | String | Configuration name. The value is fromJobConfig for a source job, toJobConfig for a destination job, and linkConfig for a link. |
| id | No | Integer | Configuration ID, which is generated by the system. You do not need to set this parameter. |
| type | No | String | Configuration type, which is generated by the system. You do not need to set this parameter. The value can be LINK (for link management APIs) or JOB (for job management APIs). |
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | Yes | String | Parameter name. |
| value | Yes | Object | Parameter value, which must be a string. |
| type | No | String | Value type, such as STRING or INTEGER. The value is set by the system. |
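Since every inputs element is a name/value pair whose name is prefixed with the configuration name (fromJobConfig, toJobConfig, and so on), a small helper can build the list from a plain dict. The helper below is illustrative only and is not part of the CDM SDK:

```python
def to_inputs(prefix, params):
    """Convert a plain dict into the name/value 'inputs' list this API expects.

    Each entry becomes {"name": "<prefix>.<key>", "value": "<value>"}; values
    are stringified because the API requires string parameter values.
    """
    return [{"name": f"{prefix}.{k}", "value": str(v)} for k, v in params.items()]

# Build a source-side configs entry from the example job's parameters.
from_config = {
    "name": "fromJobConfig",
    "inputs": to_inputs("fromJobConfig", {"index": "52est", "type": "est_array"}),
}
```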
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | No | String | Extended configuration name. This parameter is not open to external systems and does not need to be set. |
| value | No | String | Extended configuration value. This parameter is not open to external systems and does not need to be set. |
Response Parameters
Status code: 200
| Parameter | Type | Description |
|---|---|---|
| submissions | Array of StartJobSubmission objects | Job running information. For details, see the descriptions of submission parameters. |
| Parameter | Type | Description |
|---|---|---|
| isIncrementing | Boolean | Whether the job migrates incremental data |
| delete_rows | Integer | Number of deleted rows |
| update_rows | Integer | Number of updated rows |
| write_rows | Integer | Number of written rows |
| submission-id | Integer | Job submission ID |
| job-name | String | Job name |
| creation-user | String | User who started the job |
| creation-date | Long | Job creation time, accurate to the millisecond |
| execute-date | Long | Execution time |
| progress | Float | Job progress. If a job fails, the value is -1. Otherwise, the value ranges from 0 to 100. |
| status | String | Job status |
| isStopingIncrement | String | Whether to stop incremental data migration |
| is-execute-auto | Boolean | Whether to execute the job as scheduled |
| last-update-date | Long | Time when the job was last updated |
| last-udpate-user | String | User who last updated the job status |
| isDeleteJob | Boolean | Whether to delete the job after it is executed |
Example Requests
Randomly selecting a CDM cluster and creating a job named es_css to migrate tables from Elasticsearch to DIS
POST /v1.1/1551c7f6c808414d8e9f3c514a170f2e/clusters/job

```json
{
  "jobs": [{
    "job_type": "NORMAL_JOB",
    "from-connector-name": "elasticsearch-connector",
    "to-config-values": {
      "configs": [{
        "inputs": [
          {"name": "toJobConfig.streamName", "value": "dis-lkGm"},
          {"name": "toJobConfig.separator", "value": "|"},
          {"name": "toJobConfig.columnList", "value": "1&2&3"}
        ],
        "name": "toJobConfig"
      }]
    },
    "to-link-name": "dis",
    "driver-config-values": {
      "configs": [{
        "inputs": [
          {"name": "throttlingConfig.numExtractors", "value": "1"},
          {"name": "throttlingConfig.submitToCluster", "value": "false"},
          {"name": "throttlingConfig.numLoaders", "value": "1"},
          {"name": "throttlingConfig.recordDirtyData", "value": "false"}
        ],
        "name": "throttlingConfig"
      }, {
        "inputs": [],
        "name": "jarConfig"
      }, {
        "inputs": [
          {"name": "schedulerConfig.isSchedulerJob", "value": "false"},
          {"name": "schedulerConfig.disposableType", "value": "NONE"}
        ],
        "name": "schedulerConfig"
      }, {
        "inputs": [],
        "name": "transformConfig"
      }, {
        "inputs": [
          {"name": "retryJobConfig.retryJobType", "value": "NONE"}
        ],
        "name": "retryJobConfig"
      }]
    },
    "from-config-values": {
      "configs": [{
        "inputs": [
          {"name": "fromJobConfig.index", "value": "52est"},
          {"name": "fromJobConfig.type", "value": "est_array"},
          {"name": "fromJobConfig.columnList", "value": "array_f1_int:long&array_f2_text:string&array_f3_object:nested"},
          {"name": "fromJobConfig.splitNestedField", "value": "false"}
        ],
        "name": "fromJobConfig"
      }]
    },
    "to-connector-name": "dis-connector",
    "name": "es_css",
    "from-link-name": "css"
  }],
  "clusters": [
    "b0791496-e111-4e75-b7ca-9277aeab9297",
    "c2db1191-eb6c-464a-a0d3-b434e6c6df26",
    "c2db1191-eb6c-464a-a0d3-b434e6c6df26"
  ]
}
```
Example Responses
Status code: 200
Request succeeded.
```json
{
  "submissions": [{
    "isIncrementing": false,
    "job-name": "obs2obs-03",
    "submission-id": 13,
    "isStopingIncrement": "",
    "last-update-date": 1635909057030,
    "is-execute-auto": false,
    "delete_rows": 0,
    "write_rows": 0,
    "isDeleteJob": false,
    "creation-user": "cdmUser",
    "progress": 0,
    "creation-date": 1635909057030,
    "update_rows": 0,
    "status": "PENDING",
    "execute-date": 1635909057030
  }]
}
```
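A hedged sketch of inspecting such a response with the standard json module (the payload below is an abridged copy of the example response):

```python
import json

# Abridged copy of the example response above.
response_text = """
{
  "submissions": [
    {"job-name": "obs2obs-03", "submission-id": 13, "status": "PENDING", "progress": 0}
  ]
}
"""

# Extract the first submission and report its status and progress.
submission = json.loads(response_text)["submissions"][0]
print(submission["job-name"], submission["status"], submission["progress"])
```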
SDK Sample Code
The SDK sample code is as follows.
Randomly selecting a CDM cluster and creating a job named es_css to migrate tables from Elasticsearch to DIS
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.cdm.v1.region.CdmRegion;
import com.huaweicloud.sdk.cdm.v1.*;
import com.huaweicloud.sdk.cdm.v1.model.*;

import java.util.List;
import java.util.ArrayList;

public class CreateAndStartRandomClusterJobSolution {

    public static void main(String[] args) {
        // Hard-coding the AK and SK or storing them in plaintext poses security risks.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        CdmClient client = CdmClient.newBuilder()
                .withCredential(auth)
                .withRegion(CdmRegion.valueOf("<YOUR REGION>"))
                .build();

        CreateAndStartRandomClusterJobRequest request = new CreateAndStartRandomClusterJobRequest();
        CdmRandomCreateAndStartJobJsonReq body = new CdmRandomCreateAndStartJobJsonReq();

        // Candidate clusters: the system picks a random running cluster from this list.
        List<String> listbodyClusters = new ArrayList<>();
        listbodyClusters.add("b0791496-e111-4e75-b7ca-9277aeab9297");
        listbodyClusters.add("c2db1191-eb6c-464a-a0d3-b434e6c6df26");
        listbodyClusters.add("c2db1191-eb6c-464a-a0d3-b434e6c6df26");

        // Source (Elasticsearch) job parameters.
        List<Input> listConfigsInputs = new ArrayList<>();
        listConfigsInputs.add(new Input().withName("fromJobConfig.index").withValue("52est"));
        listConfigsInputs.add(new Input().withName("fromJobConfig.type").withValue("est_array"));
        listConfigsInputs.add(new Input().withName("fromJobConfig.columnList").withValue("array_f1_int:long&array_f2_text:string&array_f3_object:nested"));
        listConfigsInputs.add(new Input().withName("fromJobConfig.splitNestedField").withValue("false"));
        List<Configs> listFromConfigValuesConfigs = new ArrayList<>();
        listFromConfigValuesConfigs.add(new Configs().withInputs(listConfigsInputs).withName("fromJobConfig"));
        ConfigValues fromconfigvaluesJobs = new ConfigValues();
        fromconfigvaluesJobs.withConfigs(listFromConfigValuesConfigs);

        // Job-level (driver) parameters: retry, scheduling, and throttling.
        List<Input> listConfigsInputs1 = new ArrayList<>();
        listConfigsInputs1.add(new Input().withName("retryJobConfig.retryJobType").withValue("NONE"));
        List<Input> listConfigsInputs2 = new ArrayList<>();
        listConfigsInputs2.add(new Input().withName("schedulerConfig.isSchedulerJob").withValue("false"));
        listConfigsInputs2.add(new Input().withName("schedulerConfig.disposableType").withValue("NONE"));
        List<Input> listConfigsInputs3 = new ArrayList<>();
        listConfigsInputs3.add(new Input().withName("throttlingConfig.numExtractors").withValue("1"));
        listConfigsInputs3.add(new Input().withName("throttlingConfig.submitToCluster").withValue("false"));
        listConfigsInputs3.add(new Input().withName("throttlingConfig.numLoaders").withValue("1"));
        listConfigsInputs3.add(new Input().withName("throttlingConfig.recordDirtyData").withValue("false"));
        List<Configs> listDriverConfigValuesConfigs = new ArrayList<>();
        listDriverConfigValuesConfigs.add(new Configs().withInputs(listConfigsInputs3).withName("throttlingConfig"));
        listDriverConfigValuesConfigs.add(new Configs().withInputs(listConfigsInputs2).withName("schedulerConfig"));
        listDriverConfigValuesConfigs.add(new Configs().withInputs(listConfigsInputs1).withName("retryJobConfig"));
        ConfigValues driverconfigvaluesJobs = new ConfigValues();
        driverconfigvaluesJobs.withConfigs(listDriverConfigValuesConfigs);

        // Destination (DIS) job parameters.
        List<Input> listConfigsInputs4 = new ArrayList<>();
        listConfigsInputs4.add(new Input().withName("toJobConfig.streamName").withValue("dis-lkGm"));
        listConfigsInputs4.add(new Input().withName("toJobConfig.separator").withValue("|"));
        listConfigsInputs4.add(new Input().withName("toJobConfig.columnList").withValue("1&2&3"));
        List<Configs> listToConfigValuesConfigs = new ArrayList<>();
        listToConfigValuesConfigs.add(new Configs().withInputs(listConfigsInputs4).withName("toJobConfig"));
        ConfigValues toconfigvaluesJobs = new ConfigValues();
        toconfigvaluesJobs.withConfigs(listToConfigValuesConfigs);

        List<Job> listbodyJobs = new ArrayList<>();
        listbodyJobs.add(new Job()
                .withJobType(Job.JobTypeEnum.fromValue("NORMAL_JOB"))
                .withFromConnectorName("elasticsearch-connector")
                .withToConfigValues(toconfigvaluesJobs)
                .withToLinkName("dis")
                .withDriverConfigValues(driverconfigvaluesJobs)
                .withFromConfigValues(fromconfigvaluesJobs)
                .withToConnectorName("dis-connector")
                .withName("es_css")
                .withFromLinkName("css"));
        body.withClusters(listbodyClusters);
        body.withJobs(listbodyJobs);
        request.withBody(body);
        try {
            CreateAndStartRandomClusterJobResponse response = client.createAndStartRandomClusterJob(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
Randomly selecting a CDM cluster and creating a job named es_css to migrate tables from Elasticsearch to DIS
```python
# coding: utf-8

import os
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkcdm.v1.region.cdm_region import CdmRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkcdm.v1 import *

if __name__ == "__main__":
    # Hard-coding the AK and SK or storing them in plaintext poses security risks.
    # Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
    # In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
    ak = os.environ["CLOUD_SDK_AK"]
    sk = os.environ["CLOUD_SDK_SK"]
    projectId = "{project_id}"

    credentials = BasicCredentials(ak, sk, projectId)

    client = CdmClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(CdmRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = CreateAndStartRandomClusterJobRequest()
        # Candidate clusters: the system picks a random running cluster from this list.
        listClustersbody = [
            "b0791496-e111-4e75-b7ca-9277aeab9297",
            "c2db1191-eb6c-464a-a0d3-b434e6c6df26",
            "c2db1191-eb6c-464a-a0d3-b434e6c6df26"
        ]
        # Source (Elasticsearch) job parameters.
        listInputsConfigs = [
            Input(name="fromJobConfig.index", value="52est"),
            Input(name="fromJobConfig.type", value="est_array"),
            Input(name="fromJobConfig.columnList", value="array_f1_int:long&array_f2_text:string&array_f3_object:nested"),
            Input(name="fromJobConfig.splitNestedField", value="false")
        ]
        listConfigsFromconfigvalues = [
            Configs(inputs=listInputsConfigs, name="fromJobConfig")
        ]
        fromconfigvaluesJobs = ConfigValues(configs=listConfigsFromconfigvalues)
        # Job-level (driver) parameters: retry, scheduling, and throttling.
        listInputsConfigs1 = [
            Input(name="retryJobConfig.retryJobType", value="NONE")
        ]
        listInputsConfigs2 = [
            Input(name="schedulerConfig.isSchedulerJob", value="false"),
            Input(name="schedulerConfig.disposableType", value="NONE")
        ]
        listInputsConfigs3 = [
            Input(name="throttlingConfig.numExtractors", value="1"),
            Input(name="throttlingConfig.submitToCluster", value="false"),
            Input(name="throttlingConfig.numLoaders", value="1"),
            Input(name="throttlingConfig.recordDirtyData", value="false")
        ]
        listConfigsDriverconfigvalues = [
            Configs(inputs=listInputsConfigs3, name="throttlingConfig"),
            Configs(inputs=listInputsConfigs2, name="schedulerConfig"),
            Configs(inputs=listInputsConfigs1, name="retryJobConfig")
        ]
        driverconfigvaluesJobs = ConfigValues(configs=listConfigsDriverconfigvalues)
        # Destination (DIS) job parameters.
        listInputsConfigs4 = [
            Input(name="toJobConfig.streamName", value="dis-lkGm"),
            Input(name="toJobConfig.separator", value="|"),
            Input(name="toJobConfig.columnList", value="1&2&3")
        ]
        listConfigsToconfigvalues = [
            Configs(inputs=listInputsConfigs4, name="toJobConfig")
        ]
        toconfigvaluesJobs = ConfigValues(configs=listConfigsToconfigvalues)
        listJobsbody = [
            Job(
                job_type="NORMAL_JOB",
                from_connector_name="elasticsearch-connector",
                to_config_values=toconfigvaluesJobs,
                to_link_name="dis",
                driver_config_values=driverconfigvaluesJobs,
                from_config_values=fromconfigvaluesJobs,
                to_connector_name="dis-connector",
                name="es_css",
                from_link_name="css"
            )
        ]
        request.body = CdmRandomCreateAndStartJobJsonReq(
            clusters=listClustersbody,
            jobs=listJobsbody
        )
        response = client.create_and_start_random_cluster_job(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
Randomly selecting a CDM cluster and creating a job named es_css to migrate tables from Elasticsearch to DIS
```go
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	cdm "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/cdm/v1/region"
)

func main() {
	// Hard-coding the AK and SK or storing them in plaintext poses security risks.
	// Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
	// In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	client := cdm.NewCdmClient(
		cdm.CdmClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	request := &model.CreateAndStartRandomClusterJobRequest{}
	// Candidate clusters: the system picks a random running cluster from this list.
	var listClustersbody = []string{
		"b0791496-e111-4e75-b7ca-9277aeab9297",
		"c2db1191-eb6c-464a-a0d3-b434e6c6df26",
		"c2db1191-eb6c-464a-a0d3-b434e6c6df26",
	}
	// Source (Elasticsearch) job parameters.
	valueInputs := "52est"
	var valueInputsInterface interface{} = valueInputs
	valueInputs1 := "est_array"
	var valueInputsInterface1 interface{} = valueInputs1
	valueInputs2 := "array_f1_int:long&array_f2_text:string&array_f3_object:nested"
	var valueInputsInterface2 interface{} = valueInputs2
	valueInputs3 := "false"
	var valueInputsInterface3 interface{} = valueInputs3
	var listInputsConfigs = []model.Input{
		{Name: "fromJobConfig.index", Value: &valueInputsInterface},
		{Name: "fromJobConfig.type", Value: &valueInputsInterface1},
		{Name: "fromJobConfig.columnList", Value: &valueInputsInterface2},
		{Name: "fromJobConfig.splitNestedField", Value: &valueInputsInterface3},
	}
	var listConfigsFromConfigValues = []model.Configs{
		{Inputs: listInputsConfigs, Name: "fromJobConfig"},
	}
	fromconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsFromConfigValues,
	}
	// Job-level (driver) parameters: retry, scheduling, and throttling.
	valueInputs4 := "NONE"
	var valueInputsInterface4 interface{} = valueInputs4
	var listInputsConfigs1 = []model.Input{
		{Name: "retryJobConfig.retryJobType", Value: &valueInputsInterface4},
	}
	valueInputs5 := "false"
	var valueInputsInterface5 interface{} = valueInputs5
	valueInputs6 := "NONE"
	var valueInputsInterface6 interface{} = valueInputs6
	var listInputsConfigs2 = []model.Input{
		{Name: "schedulerConfig.isSchedulerJob", Value: &valueInputsInterface5},
		{Name: "schedulerConfig.disposableType", Value: &valueInputsInterface6},
	}
	valueInputs7 := "1"
	var valueInputsInterface7 interface{} = valueInputs7
	valueInputs8 := "false"
	var valueInputsInterface8 interface{} = valueInputs8
	valueInputs9 := "1"
	var valueInputsInterface9 interface{} = valueInputs9
	valueInputs10 := "false"
	var valueInputsInterface10 interface{} = valueInputs10
	var listInputsConfigs3 = []model.Input{
		{Name: "throttlingConfig.numExtractors", Value: &valueInputsInterface7},
		{Name: "throttlingConfig.submitToCluster", Value: &valueInputsInterface8},
		{Name: "throttlingConfig.numLoaders", Value: &valueInputsInterface9},
		{Name: "throttlingConfig.recordDirtyData", Value: &valueInputsInterface10},
	}
	var listConfigsDriverConfigValues = []model.Configs{
		{Inputs: listInputsConfigs3, Name: "throttlingConfig"},
		{Inputs: listInputsConfigs2, Name: "schedulerConfig"},
		{Inputs: listInputsConfigs1, Name: "retryJobConfig"},
	}
	driverconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsDriverConfigValues,
	}
	// Destination (DIS) job parameters.
	valueInputs11 := "dis-lkGm"
	var valueInputsInterface11 interface{} = valueInputs11
	valueInputs12 := "|"
	var valueInputsInterface12 interface{} = valueInputs12
	valueInputs13 := "1&2&3"
	var valueInputsInterface13 interface{} = valueInputs13
	var listInputsConfigs4 = []model.Input{
		{Name: "toJobConfig.streamName", Value: &valueInputsInterface11},
		{Name: "toJobConfig.separator", Value: &valueInputsInterface12},
		{Name: "toJobConfig.columnList", Value: &valueInputsInterface13},
	}
	var listConfigsToConfigValues = []model.Configs{
		{Inputs: listInputsConfigs4, Name: "toJobConfig"},
	}
	toconfigvaluesJobs := &model.ConfigValues{
		Configs: listConfigsToConfigValues,
	}
	var listJobsbody = []model.Job{
		{
			JobType:            model.GetJobJobTypeEnum().NORMAL_JOB,
			FromConnectorName:  "elasticsearch-connector",
			ToConfigValues:     toconfigvaluesJobs,
			ToLinkName:         "dis",
			DriverConfigValues: driverconfigvaluesJobs,
			FromConfigValues:   fromconfigvaluesJobs,
			ToConnectorName:    "dis-connector",
			Name:               "es_css",
			FromLinkName:       "css",
		},
	}
	request.Body = &model.CdmRandomCreateAndStartJobJsonReq{
		Clusters: listClustersbody,
		Jobs:     listJobsbody,
	}
	response, err := client.CreateAndStartRandomClusterJob(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code of more programming languages, see the Sample Code tab in API Explorer. SDK sample code can be automatically generated.
Status Codes
| Status Code | Description |
|---|---|
| 200 | Request succeeded. |
Error Codes
See Error Codes.