Querying Smart Connect Task Details
Function
This API is used to query Smart Connect task details.
Calling Method
For details, see Calling APIs.
URI
GET /v2/{project_id}/instances/{instance_id}/connector/tasks/{task_id}
Path parameters:

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details, see Obtaining a Project ID. |
| instance_id | Yes | String | Instance ID. |
| task_id | Yes | String | Smart Connect task ID. |
Request Parameters
None
Response Parameters
Status code: 200
Response body parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| task_name | String | Smart Connect task name. |
| topics | String | Topic of a Smart Connect task. |
| topics_regex | String | Regular expression of the topic of a Smart Connect task. |
| source_type | String | Source type of a Smart Connect task. |
| source_task | Object | Source configuration of a Smart Connect task. The fields returned depend on the source type; see the source_task parameters below. |
| sink_type | String | Target type of a Smart Connect task. |
| sink_task | Object | Target configuration of a Smart Connect task. The fields returned depend on the target type; see the sink_task parameters below. |
| id | String | ID of a Smart Connect task. |
| status | String | Smart Connect task status. |
| create_time | Long | Time when the Smart Connect task was created. |
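Which fields source_task and sink_task actually contain depends on source_type and sink_type, as the two tables below describe. A minimal client-side sketch of handling this, assuming the response has been parsed into a plain dict (the REDIS_REPLICATOR_SOURCE value is taken from the example response later in this section; the helper itself is illustrative, not part of the SDK):

```python
def summarize_source(task: dict) -> str:
    """Summarize the source side of a Smart Connect task response."""
    source = task.get("source_task") or {}
    if task.get("source_type") == "REDIS_REPLICATOR_SOURCE":
        # Redis-type sources carry redis_address, sync_mode, and so on
        # (see the source_task table below).
        return "Redis source {} (sync_mode={})".format(
            source.get("redis_address"), source.get("sync_mode"))
    # Kafka-type sources carry bootstrap_servers, topics_mapping, etc. instead.
    return "source_type={}".format(task.get("source_type"))
```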
Parameters in the source_task object:

| Parameter | Type | Description |
| --- | --- | --- |
| redis_address | String | Redis instance address. (Displayed only when the source type is Redis.) |
| redis_type | String | Redis instance type. (Displayed only when the source type is Redis.) |
| dcs_instance_id | String | DCS instance ID. (Displayed only when the source type is Redis.) |
| sync_mode | String | Synchronization type. RDB_ONLY: full synchronization; CUSTOM_OFFSET: full and incremental synchronization. (Displayed only when the source type is Redis.) |
| full_sync_wait_ms | Integer | Interval between full synchronization retries, in ms. (Displayed only when the source type is Redis.) |
| full_sync_max_retry | Integer | Maximum number of full synchronization retries. (Displayed only when the source type is Redis.) |
| ratelimit | Integer | Rate limit, in KB/s. -1: disabled. (Displayed only when the source type is Redis.) |
| current_cluster_name | String | Current Kafka instance name. (Displayed only when the source type is Kafka.) |
| cluster_name | String | Target Kafka instance name. (Displayed only when the source type is Kafka.) |
| user_name | String | Username of the target Kafka instance. (Displayed only when the source type is Kafka.) |
| sasl_mechanism | String | Authentication mechanism of the target Kafka instance. (Displayed only when the source type is Kafka.) |
| instance_id | String | Target Kafka instance ID. (Displayed only when the source type is Kafka.) |
| bootstrap_servers | String | Target Kafka instance address. (Displayed only when the source type is Kafka.) |
| security_protocol | String | Authentication protocol of the target Kafka instance. (Displayed only when the source type is Kafka.) |
| direction | String | Synchronization direction. (Displayed only when the source type is Kafka.) |
| sync_consumer_offsets_enabled | Boolean | Whether to synchronize the consumption progress. (Displayed only when the source type is Kafka.) |
| replication_factor | Integer | Number of replicas. (Displayed only when the source type is Kafka.) |
| task_num | Integer | Number of tasks. (Displayed only when the source type is Kafka.) |
| rename_topic_enabled | Boolean | Whether to rename topics. (Displayed only when the source type is Kafka.) |
| provenance_header_enabled | Boolean | Whether to add the source header. (Displayed only when the source type is Kafka.) |
| consumer_strategy | String | Start offset. latest: obtain the latest data; earliest: obtain the earliest data. (Displayed only when the source type is Kafka.) |
| compression_type | String | Compression algorithm. (Displayed only when the source type is Kafka.) |
| topics_mapping | String | Topic mapping. (Displayed only when the source type is Kafka.) |
Parameters in the sink_task object:

| Parameter | Type | Description |
| --- | --- | --- |
| redis_address | String | Redis instance address. (Displayed only when the target type is Redis.) |
| redis_type | String | Redis instance type. (Displayed only when the target type is Redis.) |
| dcs_instance_id | String | DCS instance ID. (Displayed only when the target type is Redis.) |
| target_db | Integer | Target database. The default value is -1. (Displayed only when the target type is Redis.) |
| consumer_strategy | String | Start offset. latest: obtain the latest data; earliest: obtain the earliest data. (Displayed only when the target type is OBS.) |
| destination_file_type | String | Dump file format. Only TEXT is supported. (Displayed only when the target type is OBS.) |
| deliver_time_interval | Integer | Dump period, in seconds. (Displayed only when the target type is OBS.) |
| obs_bucket_name | String | Dump address (OBS bucket). (Displayed only when the target type is OBS.) |
| obs_path | String | Dump directory. (Displayed only when the target type is OBS.) |
| partition_format | String | Time directory format. (Displayed only when the target type is OBS.) |
| record_delimiter | String | Line break character. (Displayed only when the target type is OBS.) |
| store_keys | Boolean | Whether to store keys. (Displayed only when the target type is OBS.) |
| obs_part_size | Integer | Size (in bytes) of each file to be uploaded. The default value is 5242880 (5 MB). (Displayed only when the target type is OBS.) |
| flush_size | Integer | Flush size. (Displayed only when the target type is OBS.) |
| timezone | String | Time zone. (Displayed only when the target type is OBS.) |
| schema_generator_class | String | Schema generator class. The default value is io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator. (Displayed only when the target type is OBS.) |
| partitioner_class | String | Partitioner class. The default value is io.confluent.connect.storage.partitioner.TimeBasedPartitioner. (Displayed only when the target type is OBS.) |
| value_converter | String | Value converter. The default value is org.apache.kafka.connect.converters.ByteArrayConverter. (Displayed only when the target type is OBS.) |
| key_converter | String | Key converter. The default value is org.apache.kafka.connect.converters.ByteArrayConverter. (Displayed only when the target type is OBS.) |
| kv_delimiter | String | Key-value delimiter. The default value is :. (Displayed only when the target type is OBS.) |
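For illustration, an OBS-type sink_task could look like the sketch below. The default values (destination_file_type, obs_part_size, kv_delimiter) come from the table above; the bucket name, path, dump period, and time directory format are hypothetical placeholders, and the exact set of fields returned for an OBS target is not confirmed here:

```json
{
  "consumer_strategy" : "latest",
  "destination_file_type" : "TEXT",
  "deliver_time_interval" : 300,
  "obs_bucket_name" : "example-bucket",
  "obs_path" : "kafka-dump",
  "partition_format" : "yyyy/MM/dd/HH",
  "record_delimiter" : "\n",
  "store_keys" : false,
  "obs_part_size" : 5242880,
  "kv_delimiter" : ":"
}
```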
Example Requests
This API does not require a request body; the task to query is identified entirely by the URI path parameters.
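A minimal request sketch for reference; {endpoint} stands for the region-specific API endpoint, and the three IDs are the path parameters described above:

```
GET https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks/{task_id}
```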
Example Responses
Status code: 200
Successful.
{ "task_name" : "smart-connect-121248117", "topics" : "topic-sc", "source_task" : { "redis_address" : "192.168.91.179:6379", "redis_type" : "standalone", "dcs_instance_id" : "949190a2-598a-4afd-99a8-dad3cae1e7cd", "sync_mode" : "RDB_ONLY,", "full_sync_wait_ms" : 13000, "full_sync_max_retry" : 4, "ratelimit" : -1 }, "source_type" : "REDIS_REPLICATOR_SOURCE", "sink_task" : { "redis_address" : "192.168.119.51:6379", "redis_type" : "standalone", "dcs_instance_id" : "9b981368-a8e3-416a-87d9-1581a968b41b", "target_db" : -1 }, "sink_type" : "REDIS_REPLICATOR_SINK", "id" : "8a205bbd-7181-4b5e-9bd6-37274ce84577", "status" : "RUNNING", "create_time" : 1708427753133 }
SDK Sample Code
The SDK sample code is as follows.
Java:

```java
package com.huaweicloud.sdk.test;

import com.huaweicloud.sdk.core.auth.ICredential;
import com.huaweicloud.sdk.core.auth.BasicCredentials;
import com.huaweicloud.sdk.core.exception.ConnectionException;
import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
import com.huaweicloud.sdk.core.exception.ServiceResponseException;
import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
import com.huaweicloud.sdk.kafka.v2.*;
import com.huaweicloud.sdk.kafka.v2.model.*;

public class ShowConnectorTaskSolution {

    public static void main(String[] args) {
        // Hard-coding the AK and SK or storing them in plaintext poses significant security risks.
        // Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
        // In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and
        // CLOUD_SDK_SK; set both in the local environment before running this example.
        String ak = System.getenv("CLOUD_SDK_AK");
        String sk = System.getenv("CLOUD_SDK_SK");
        String projectId = "{project_id}";

        ICredential auth = new BasicCredentials()
                .withProjectId(projectId)
                .withAk(ak)
                .withSk(sk);

        // Build a client bound to your region.
        KafkaClient client = KafkaClient.newBuilder()
                .withCredential(auth)
                .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                .build();

        // Query the Smart Connect task details.
        ShowConnectorTaskRequest request = new ShowConnectorTaskRequest();
        request.withInstanceId("{instance_id}");
        request.withTaskId("{task_id}");
        try {
            ShowConnectorTaskResponse response = client.showConnectorTask(request);
            System.out.println(response.toString());
        } catch (ConnectionException e) {
            e.printStackTrace();
        } catch (RequestTimeoutException e) {
            e.printStackTrace();
        } catch (ServiceResponseException e) {
            e.printStackTrace();
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
```
Python:

```python
# coding: utf-8

import os

from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkkafka.v2 import *

if __name__ == "__main__":
    # Hard-coding the AK and SK or storing them in plaintext poses significant security risks.
    # Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
    # In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and
    # CLOUD_SDK_SK; set both in the local environment before running this example.
    ak = os.environ["CLOUD_SDK_AK"]
    sk = os.environ["CLOUD_SDK_SK"]
    project_id = "{project_id}"

    credentials = BasicCredentials(ak, sk, project_id)

    # Build a client bound to your region.
    client = KafkaClient.new_builder() \
        .with_credentials(credentials) \
        .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
        .build()

    try:
        # Query the Smart Connect task details.
        request = ShowConnectorTaskRequest()
        request.instance_id = "{instance_id}"
        request.task_id = "{task_id}"
        response = client.show_connector_task(request)
        print(response)
    except exceptions.ClientRequestException as e:
        print(e.status_code)
        print(e.request_id)
        print(e.error_code)
        print(e.error_msg)
```
Go:

```go
package main

import (
	"fmt"
	"os"

	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
)

func main() {
	// Hard-coding the AK and SK or storing them in plaintext poses significant security risks.
	// Store them in ciphertext in configuration files or environment variables and decrypt them when needed.
	// In this example, the AK and SK are read from the environment variables CLOUD_SDK_AK and
	// CLOUD_SDK_SK; set both in the local environment before running this example.
	ak := os.Getenv("CLOUD_SDK_AK")
	sk := os.Getenv("CLOUD_SDK_SK")
	projectId := "{project_id}"

	auth := basic.NewCredentialsBuilder().
		WithAk(ak).
		WithSk(sk).
		WithProjectId(projectId).
		Build()

	// Build a client bound to your region.
	client := kafka.NewKafkaClient(
		kafka.KafkaClientBuilder().
			WithRegion(region.ValueOf("<YOUR REGION>")).
			WithCredential(auth).
			Build())

	// Query the Smart Connect task details.
	request := &model.ShowConnectorTaskRequest{}
	request.InstanceId = "{instance_id}"
	request.TaskId = "{task_id}"
	response, err := client.ShowConnectorTask(request)
	if err == nil {
		fmt.Printf("%+v\n", response)
	} else {
		fmt.Println(err)
	}
}
```
For SDK sample code in more programming languages, see the Sample Code tab in API Explorer, which can generate SDK sample code automatically.
Status Codes
| Status Code | Description |
| --- | --- |
| 200 | Successful. |
Error Codes
See Error Codes.