Querying Smart Connector Task Details
Function

Queries the details of a Smart Connector task.
Calling Method

For details, see How to Call APIs.
URI
GET /v2/{project_id}/instances/{instance_id}/connector/tasks/{task_id}
Path parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details about how to obtain it, see Obtaining a Project ID. |
| instance_id | Yes | String | Instance ID. |
| task_id | Yes | String | Smart Connector task ID. |
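For illustration, with {endpoint} standing in for the DMS for Kafka endpoint of your region, a concrete request for the task shown in the example response below would look like this (the project and instance IDs remain placeholders):

```
GET https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks/8a205bbd-7181-4b5e-9bd6-37274ce84577
```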
Request Parameters

None
Response Parameters

Status code: 200
Response body parameters

| Parameter | Type | Description |
| --- | --- | --- |
| task_name | String | SmartConnect task name. |
| topics | String | Topic configured for the SmartConnect task. |
| topics_regex | String | Regular expression of the topics configured for the SmartConnect task. |
| source_type | String | Source type of the SmartConnect task. |
| source_task | Object | Source configuration of the SmartConnect task. For details, see the source_task parameter table below. |
| sink_type | String | Target (sink) type of the SmartConnect task. |
| sink_task | Object | Target configuration of the SmartConnect task. For details, see the sink_task parameter table below. |
| id | String | ID of the SmartConnect task. |
| status | String | Status of the SmartConnect task. |
| create_time | Long | Time when the SmartConnect task was created. |
source_task parameters

| Parameter | Type | Description |
| --- | --- | --- |
| redis_address | String | Redis instance address. (Displayed only when the source type is Redis.) |
| redis_type | String | Redis instance type. (Displayed only when the source type is Redis.) |
| dcs_instance_id | String | DCS instance ID. (Displayed only when the source type is Redis.) |
| sync_mode | String | Synchronization type: RDB_ONLY indicates full synchronization; CUSTOM_OFFSET indicates full plus incremental synchronization. (Displayed only when the source type is Redis.) |
| full_sync_wait_ms | Integer | Retry interval for full synchronization, in milliseconds. (Displayed only when the source type is Redis.) |
| full_sync_max_retry | Integer | Maximum number of retries for full synchronization. (Displayed only when the source type is Redis.) |
| ratelimit | Integer | Rate limit, in KB/s. -1 indicates no limit. (Displayed only when the source type is Redis.) |
| current_cluster_name | String | Alias of the current Kafka instance. (Displayed only when the source type is Kafka.) |
| cluster_name | String | Alias of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| user_name | String | Username of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| sasl_mechanism | String | Authentication mechanism of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| instance_id | String | ID of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| bootstrap_servers | String | Address of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| security_protocol | String | Authentication method of the peer Kafka instance. (Displayed only when the source type is Kafka.) |
| direction | String | Synchronization direction. (Displayed only when the source type is Kafka.) |
| sync_consumer_offsets_enabled | Boolean | Whether to synchronize consumer offsets. (Displayed only when the source type is Kafka.) |
| replication_factor | Integer | Number of replicas. (Displayed only when the source type is Kafka.) |
| task_num | Integer | Number of tasks. (Displayed only when the source type is Kafka.) |
| rename_topic_enabled | Boolean | Whether to rename topics. (Displayed only when the source type is Kafka.) |
| provenance_header_enabled | Boolean | Whether to add a provenance header. (Displayed only when the source type is Kafka.) |
| consumer_strategy | String | Start offset: latest to obtain the latest data; earliest to obtain the earliest data. (Displayed only when the source type is Kafka.) |
| compression_type | String | Compression algorithm. (Displayed only when the source type is Kafka.) |
| topics_mapping | String | Topic mapping. (Displayed only when the source type is Kafka.) |
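The example response later in this section only shows a Redis source, so the following is a sketch of what a source_task object for a Kafka source could look like. Only the field names come from the table above; every value is hypothetical and shown purely for illustration.

```json
{
  "current_cluster_name" : "A",
  "cluster_name" : "B",
  "user_name" : "{peer_kafka_user}",
  "sasl_mechanism" : "SCRAM-SHA-512",
  "instance_id" : "{peer_instance_id}",
  "bootstrap_servers" : "192.168.0.10:9092,192.168.0.11:9092",
  "security_protocol" : "SASL_SSL",
  "direction" : "pull",
  "sync_consumer_offsets_enabled" : false,
  "replication_factor" : 3,
  "task_num" : 2,
  "rename_topic_enabled" : false,
  "provenance_header_enabled" : false,
  "consumer_strategy" : "latest",
  "compression_type" : "none",
  "topics_mapping" : "topic-sc:topic-sc-replica"
}
```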
sink_task parameters

| Parameter | Type | Description |
| --- | --- | --- |
| redis_address | String | Redis instance address. (Displayed only when the target type is Redis.) |
| redis_type | String | Redis instance type. (Displayed only when the target type is Redis.) |
| dcs_instance_id | String | DCS instance ID. (Displayed only when the target type is Redis.) |
| target_db | Integer | Target database. Default: -1. (Displayed only when the target type is Redis.) |
| consumer_strategy | String | Dump start offset: latest to obtain the latest data; earliest to obtain the earliest data. (Displayed only when the target type is OBS.) |
| destination_file_type | String | Dump file format. Currently, only TEXT is supported. (Displayed only when the target type is OBS.) |
| deliver_time_interval | Integer | Data dumping interval, in seconds. (Displayed only when the target type is OBS.) |
| obs_bucket_name | String | Dump address (the OBS bucket name). (Displayed only when the target type is OBS.) |
| obs_path | String | Dump directory. (Displayed only when the target type is OBS.) |
| partition_format | String | Time directory format. (Displayed only when the target type is OBS.) |
| record_delimiter | String | Line break between records. (Displayed only when the target type is OBS.) |
| store_keys | Boolean | Whether to store keys. (Displayed only when the target type is OBS.) |
| obs_part_size | Integer | File size at which an upload starts, in bytes. Default: 5242880. (Displayed only when the target type is OBS.) |
| flush_size | Integer | flush_size. (Displayed only when the target type is OBS.) |
| timezone | String | Time zone. (Displayed only when the target type is OBS.) |
| schema_generator_class | String | Schema generator class. Default: "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator". (Displayed only when the target type is OBS.) |
| partitioner_class | String | Partitioner class. Default: "io.confluent.connect.storage.partitioner.TimeBasedPartitioner". (Displayed only when the target type is OBS.) |
| value_converter | String | value_converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". (Displayed only when the target type is OBS.) |
| key_converter | String | key_converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". (Displayed only when the target type is OBS.) |
| kv_delimiter | String | kv_delimiter. Default: ":". (Displayed only when the target type is OBS.) |
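Because the fields present in source_task and sink_task depend on source_type and sink_type, client code usually branches on those two discriminators before reading the nested objects. Below is a minimal Python sketch of that pattern; it assumes the 200 response body has already been parsed into a dict, the describe_task helper is ours rather than part of the SDK, and any non-Redis sink is treated as OBS since those are the two sink families documented above.

```python
def describe_task(task: dict) -> str:
    """Summarize a Smart Connector task detail response.

    `task` is the parsed JSON body of this API's 200 response. Which keys
    exist inside source_task/sink_task depends on source_type/sink_type.
    """
    source = task.get("source_task", {})
    sink = task.get("sink_task", {})

    if task.get("source_type", "").startswith("REDIS"):
        # Redis source: address and sync mode (RDB_ONLY = full,
        # CUSTOM_OFFSET = full + incremental)
        src = f"Redis {source.get('redis_address')} mode={source.get('sync_mode')}"
    else:
        # Kafka source: peer cluster address and sync direction
        src = f"Kafka {source.get('bootstrap_servers')} direction={source.get('direction')}"

    if task.get("sink_type", "").startswith("REDIS"):
        # Redis sink: address and target database
        dst = f"Redis {sink.get('redis_address')} db={sink.get('target_db')}"
    else:
        # OBS sink: bucket and dump directory
        dst = f"OBS {sink.get('obs_bucket_name')}/{sink.get('obs_path')}"

    return f"{task.get('task_name')} [{task.get('status')}]: {src} -> {dst}"
```

Applied to the example response below, this would return "smart-connect-121248117 [RUNNING]: Redis 192.168.91.179:6379 mode=RDB_ONLY -> Redis 192.168.119.51:6379 db=-1".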
Example Requests

None
Example Responses

Status code: 200

The Smart Connector task details are queried successfully.
{ "task_name" : "smart-connect-121248117", "topics" : "topic-sc", "source_task" : { "redis_address" : "192.168.91.179:6379", "redis_type" : "standalone", "dcs_instance_id" : "949190a2-598a-4afd-99a8-dad3cae1e7cd", "sync_mode" : "RDB_ONLY,", "full_sync_wait_ms" : 13000, "full_sync_max_retry" : 4, "ratelimit" : -1 }, "source_type" : "REDIS_REPLICATOR_SOURCE", "sink_task" : { "redis_address" : "192.168.119.51:6379", "redis_type" : "standalone", "dcs_instance_id" : "9b981368-a8e3-416a-87d9-1581a968b41b", "target_db" : -1 }, "sink_type" : "REDIS_REPLICATOR_SINK", "id" : "8a205bbd-7181-4b5e-9bd6-37274ce84577", "status" : "RUNNING", "create_time" : 1708427753133 }
SDK Sample Code

The SDK sample code is as follows.
Java
```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
import com.huaweicloud.sdk.kafka.v2.*;
import com.huaweicloud.sdk.kafka.v2.model.*;

public class ShowConnectorTaskSolution {

    public static void main(String[] args) {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        KafkaClient client = KafkaClient.newBuilder()
                .withCredential(auth)
                .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                .build();
        ShowConnectorTaskRequest request = new ShowConnectorTaskRequest();
        request.withInstanceId("{instance_id}");
        request.withTaskId("{task_id}");
        try {
            ShowConnectorTaskResponse response = client.showConnectorTask(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
Python
```python
# coding: utf-8

import os
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkkafka.v2 import *

if __name__ == "__main__":
    # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
    # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
    ak = os.environ["CLOUD_SDK_AK"]
    sk = os.environ["CLOUD_SDK_SK"]
    projectId = "{project_id}"

    credentials = BasicCredentials(ak, sk, projectId)

    client = KafkaClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = ShowConnectorTaskRequest()
        request.instance_id = "{instance_id}"
        request.task_id = "{task_id}"
        response = client.show_connector_task(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
Go
```go
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
)

func main() {
	// The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
	// In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	client := kafka.NewKafkaClient(
		kafka.KafkaClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	request := &model.ShowConnectorTaskRequest{}
	request.InstanceId = "{instance_id}"
	request.TaskId = "{task_id}"
	response, err := client.ShowConnectorTask(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code of more programming languages, see the Sample Code tab on API Explorer, where the corresponding SDK sample code can be generated automatically.
Status Codes
| Status Code | Description |
| --- | --- |
| 200 | The Smart Connector task details are queried successfully. |
Error Codes

See Error Codes.