Querying Smart Connect Tasks
Function
This API is used to query the Smart Connect task list.
Calling Method
For details, see Calling APIs.
URI
GET /v2/{project_id}/instances/{instance_id}/connector/tasks
Path parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details about how to obtain it, see Obtaining a Project ID. |
| instance_id | Yes | String | Instance ID. |
Query parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| offset | No | Integer | Offset: the position where the query starts. The value must be greater than or equal to 0. |
| limit | No | Integer | Maximum number of records returned in a single query. Default: 10. Range: 1-50. |
Request Parameters
None
Response Parameters
Status code: 200
Response body parameters

| Parameter | Type | Description |
| --- | --- | --- |
| tasks | Array of SmartConnectTaskEntity objects | Smart Connect task details. |
| total_number | Integer | Number of Smart Connect tasks. |
| max_tasks | Integer | Maximum number of Smart Connect tasks. |
| quota_tasks | Integer | Smart Connect task quota. |
SmartConnectTaskEntity

| Parameter | Type | Description |
| --- | --- | --- |
| task_name | String | Name of the Smart Connect task. |
| topics | String | Topics configured for the task. |
| topics_regex | String | Regular expression matching the topics configured for the task. |
| source_type | String | Source type of the task. |
| source_task | Object | Source configuration of the task. See the source_task parameters table below. |
| sink_type | String | Sink (target) type of the task. |
| sink_task | Object | Sink configuration of the task. See the sink_task parameters table below. |
| id | String | ID of the task. |
| status | String | Status of the task. |
| create_time | Long | Time when the task was created. |
source_task parameters (all fields below are returned only when the source type is Kafka)

| Parameter | Type | Description |
| --- | --- | --- |
| current_cluster_name | String | Alias of the current Kafka instance. |
| cluster_name | String | Alias of the peer Kafka instance. |
| user_name | String | Username of the peer Kafka instance. |
| sasl_mechanism | String | Authentication mechanism of the peer Kafka instance. |
| instance_id | String | ID of the peer Kafka instance. |
| bootstrap_servers | String | Address of the peer Kafka instance. |
| security_protocol | String | Authentication method (security protocol) of the peer Kafka instance. |
| direction | String | Synchronization direction. |
| sync_consumer_offsets_enabled | Boolean | Whether consumer offsets are synchronized. |
| replication_factor | Integer | Number of replicas. |
| task_num | Integer | Number of tasks. |
| rename_topic_enabled | Boolean | Whether topics are renamed. |
| provenance_header_enabled | Boolean | Whether a provenance header is added. |
| consumer_strategy | String | Start offset strategy. |
| compression_type | String | Compression algorithm. |
| topics_mapping | String | Topic mapping. |
sink_task parameters (all fields below are returned only when the sink type is OBS)

| Parameter | Type | Description |
| --- | --- | --- |
| consumer_strategy | String | Start offset strategy for dumping. |
| destination_file_type | String | Format of the dumped files. Currently only TEXT is supported. |
| deliver_time_interval | Integer | Dump interval, in seconds. |
| obs_bucket_name | String | Destination OBS bucket. |
| obs_path | String | Dump directory in the bucket. |
| partition_format | String | Format of the time-based directory. |
| record_delimiter | String | Record (line) delimiter. |
| store_keys | Boolean | Whether message keys are stored. |
| obs_part_size | Integer | Size, in bytes, at which a file in transfer starts to be uploaded. Default: 5242880. |
| flush_size | Integer | Flush size. |
| timezone | String | Time zone. |
| schema_generator_class | String | Schema generator class. Default: "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator". |
| partitioner_class | String | Partitioner class. Default: "io.confluent.connect.storage.partitioner.TimeBasedPartitioner". |
| value_converter | String | Value converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". |
| key_converter | String | Key converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". |
| kv_delimiter | String | Key-value delimiter. Default: ":". |
Example Requests
None
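Although no official request example is provided, a minimal sketch of the raw HTTP call may help. This is an illustration under stated assumptions: it assumes token-based authentication via the X-Auth-Token header, and the endpoint, project ID, instance ID, and token are all placeholders.

```python
# Hedged sketch, not an official example: assumes X-Auth-Token authentication;
# the endpoint and IDs are placeholders. Prefer the SDK samples further below.
import requests

endpoint = "https://{dms_endpoint}"   # placeholder: region-specific endpoint
project_id = "{project_id}"           # placeholder
instance_id = "{instance_id}"         # placeholder

url = f"{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks"
resp = requests.get(
    url,
    headers={"X-Auth-Token": "<your token>"},  # token-based auth assumed
    params={"offset": 0, "limit": 10},         # optional pagination parameters
)
print(resp.status_code)
print(resp.json())
```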
Example Responses
Status code: 200
The Smart Connect task list is queried successfully.
```json
{
  "tasks" : [ {
    "task_name" : "smart-connect-1571576841",
    "topics" : "topic-1643449744",
    "source_task" : {
      "current_cluster_name" : "A",
      "cluster_name" : "B",
      "direction" : "pull",
      "bootstrap_servers" : "192.168.45.58:9092,192.168.44.1:9092,192.168.41.230:9092,192.168.43.112:9092",
      "instance_id" : "59f6d088-****-****-****-********",
      "consumer_strategy" : "earliest",
      "sync_consumer_offsets_enabled" : false,
      "rename_topic_enabled" : false,
      "provenance_header_enabled" : false,
      "security_protocol" : "PLAINTEXT",
      "sasl_mechanism" : "PLAIN",
      "user_name" : "",
      "topics_mapping" : "",
      "compression_type" : "none",
      "task_num" : 2,
      "replication_factor" : 3
    },
    "source_type" : "KAFKA_REPLICATOR_SOURCE",
    "sink_task" : null,
    "sink_type" : "NONE",
    "id" : "194917d0-****-****-****-********",
    "status" : "RUNNING",
    "create_time" : 1708427753133
  }, {
    "task_name" : "smart-connect-1",
    "topics_regex" : "topic-obs*",
    "source_task" : null,
    "source_type" : "NONE",
    "sink_task" : {
      "consumer_strategy" : "earliest",
      "destination_file_type" : "TEXT",
      "obs_bucket_name" : "abcabc",
      "obs_path" : "obsTransfer-1810125534",
      "partition_format" : "yyyy/MM/dd/HH/mm",
      "record_delimiter" : "\\n",
      "deliver_time_interval" : 300,
      "obs_part_size" : 5242880,
      "flush_size" : 1000000,
      "timezone" : "Asia/Chongqing",
      "schema_generator_class" : "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
      "partitioner_class" : "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
      "value_converter" : "org.apache.kafka.connect.converters.ByteArrayConverter",
      "key_converter" : "org.apache.kafka.connect.converters.ByteArrayConverter",
      "store_keys" : false,
      "kv_delimiter" : ":"
    },
    "sink_type" : "OBS_SINK",
    "id" : "3c0ac4d1-****-****-****-********",
    "status" : "RUNNING",
    "create_time" : 1708565483911
  } ],
  "total_number" : 2,
  "max_tasks" : 18,
  "quota_tasks" : 18
}
```
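Because one call returns at most `limit` tasks (default 10, maximum 50), listing a large number of tasks requires paging with `offset`. The sketch below is a hedged illustration building on the same hypothetical `requests` setup as the request example above; the field names `tasks`, `total_number`, and `quota_tasks` come from the response tables.

```python
# Hedged sketch: pages through all Smart Connect tasks with offset/limit and
# reports the remaining task quota (quota_tasks - total_number).
import requests

def list_all_tasks(endpoint, project_id, instance_id, token, page_size=50):
    url = f"{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks"
    headers = {"X-Auth-Token": token}  # token-based auth assumed
    tasks, offset = [], 0
    while True:
        resp = requests.get(url, headers=headers,
                            params={"offset": offset, "limit": page_size})
        resp.raise_for_status()
        body = resp.json()
        page = body.get("tasks", [])
        tasks.extend(page)
        # Stop once every task has been fetched or a page comes back empty.
        if not page or len(tasks) >= body.get("total_number", 0):
            return tasks, body
        offset += page_size

# Usage with placeholder values:
# tasks, body = list_all_tasks("https://{dms_endpoint}", "{project_id}",
#                              "{instance_id}", "<your token>")
# print(len(tasks), "tasks;",
#       body["quota_tasks"] - body["total_number"], "quota slots remaining")
```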
SDK Sample Code
The following are SDK sample codes in Java, Python, and Go.

Java

```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
import com.huaweicloud.sdk.kafka.v2.*;
import com.huaweicloud.sdk.kafka.v2.model.*;

public class ListConnectorTasksSolution {

    public static void main(String[] args) {
        // Hard-coding or storing the AK and SK in plaintext poses great security risks.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        KafkaClient client = KafkaClient.newBuilder()
                .withCredential(auth)
                .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                .build();
        ListConnectorTasksRequest request = new ListConnectorTasksRequest();
        request.withInstanceId("{instance_id}");
        try {
            ListConnectorTasksResponse response = client.listConnectorTasks(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
Python

```python
# coding: utf-8

import os
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkkafka.v2 import *

if __name__ == "__main__":
    # Hard-coding or storing the AK and SK in plaintext poses great security risks.
    # Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
    # In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
    ak = os.environ["CLOUD_SDK_AK"]
    sk = os.environ["CLOUD_SDK_SK"]
    project_id = "{project_id}"

    credentials = BasicCredentials(ak, sk, project_id)

    client = KafkaClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        request = ListConnectorTasksRequest()
        request.instance_id = "{instance_id}"
        response = client.list_connector_tasks(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
Go

```go
package main

import (
	"fmt"
	"os" // required for os.Getenv below; missing from the original listing

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
)

func main() {
	// Hard-coding or storing the AK and SK in plaintext poses great security risks.
	// Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
	// In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and CLOUD_SDK_SK.
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	client := kafka.NewKafkaClient(
		kafka.KafkaClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	request := &model.ListConnectorTasksRequest{}
	request.InstanceId = "{instance_id}"
	response, err := client.ListConnectorTasks(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code in more programming languages, see the Sample Code tab of API Explorer, which can automatically generate SDK sample code.
Status Codes

| Status Code | Description |
| --- | --- |
| 200 | The Smart Connect task list is queried successfully. |
Error Codes
For details, see Error Codes.