创建Smart Connect任务 (Creating a Smart Connect Task)

Updated on 2025-02-06 GMT+08:00

Function

This API is used to create a Smart Connect task.

Calling Method

For details, see Calling APIs.

URI

POST /v2/{project_id}/instances/{instance_id}/connector/tasks

Table 1 Path parameters

Parameter | Type | Description
project_id | String | Project ID. For details about how to obtain it, see Obtaining a Project ID.
instance_id | String | Instance ID.
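
The API is called over HTTPS. As described in Calling APIs, requests normally carry a user token in the X-Auth-Token header (AK/SK signing is also supported). The sketch below only illustrates the overall shape of the call; the endpoint, IDs, token, and body are placeholders, and the body fields are described in the tables that follow.

POST https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks
Content-Type: application/json
X-Auth-Token: {token}

{ ...request body as described in Table 2... }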

Request Parameters

Table 2 Request body parameters

Parameter | Type | Description
task_name | String | Smart Connect task name.
start_later | Boolean | Whether to start the task later. Set this to false to start the task immediately after creation, or to true to start it manually from the task list later.
topics | String | Topics configured for the Smart Connect task.
topics_regex | String | Regular expression of the topics configured for the Smart Connect task.
source_type | String | Source type of the Smart Connect task.
source_task | SmartConnectTaskReqSourceConfig object | Source configuration of the Smart Connect task.
sink_type | String | Sink (target) type of the Smart Connect task.
sink_task | SmartConnectTaskReqSinkConfig object | Sink (target) configuration of the Smart Connect task.
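
For orientation, the top-level fields above combine into a request body of the following shape. This is an illustrative skeleton only: topics and topics_regex are alternative ways of selecting topics, and the contents of source_task and sink_task depend on source_type and sink_type (see Table 3, Table 4, and the example requests later in this topic; for instance, the OBS dump example uses source_type NONE with an OBS sink, while the Kafka replication example configures only the source side).

{
  "task_name" : "...",
  "start_later" : false,
  "topics" : "...",
  "topics_regex" : "...",
  "source_type" : "...",
  "source_task" : { },
  "sink_type" : "...",
  "sink_task" : { }
}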

Table 3 SmartConnectTaskReqSourceConfig

Parameter | Type | Description
redis_address | String | Redis instance address. (Required only when the source type is Redis.)
redis_type | String | Redis instance type. (Required only when the source type is Redis.)
dcs_instance_id | String | DCS instance ID. (Required only when the source type is Redis.)
redis_password | String | Redis password. (Required only when the source type is Redis.)
sync_mode | String | Synchronization type: RDB_ONLY for full synchronization, CUSTOM_OFFSET for full plus incremental synchronization. (Required only when the source type is Redis.)
full_sync_wait_ms | Integer | Full synchronization retry interval, in milliseconds. (Required only when the source type is Redis.)
full_sync_max_retry | Integer | Maximum number of full synchronization retries. (Required only when the source type is Redis.)
ratelimit | Integer | Rate limit, in KB/s. -1 indicates no limit. (Required only when the source type is Redis.)
current_cluster_name | String | Alias of the current Kafka instance. (Required only when the source type is Kafka.)
cluster_name | String | Alias of the peer Kafka instance. (Required only when the source type is Kafka.)
user_name | String | Username set when SASL_SSL was enabled on the peer Kafka instance, or username of a SASL_SSL user created on it. (Required only when the source type is Kafka and the peer Kafka authentication mode is SASL_SSL.)
password | String | Password set when SASL_SSL was enabled on the peer Kafka instance, or password of a SASL_SSL user created on it. (Required only when the source type is Kafka and the peer Kafka authentication mode is SASL_SSL.)
sasl_mechanism | String | Authentication mechanism of the peer Kafka instance. (Required only when the source type is Kafka and the authentication mode is SASL_SSL.)
instance_id | String | ID of the peer Kafka instance. (Required only when the source type is Kafka. Specify either instance_id or bootstrap_servers, not both.)
bootstrap_servers | String | Address of the peer Kafka instance. (Required only when the source type is Kafka. Specify either instance_id or bootstrap_servers, not both.)
security_protocol | String | Authentication mode of the peer Kafka instance. (Required only when the source type is Kafka.) Options: SASL_SSL (SASL_SSL is enabled on the instance) and PLAINTEXT (SASL_SSL is disabled on the instance).
direction | String | Synchronization direction. pull: copy data from the peer Kafka instance to the current instance; push: copy data from the current Kafka instance to the peer instance; two-way: copy data between the two instances in both directions. (Required only when the source type is Kafka.)
sync_consumer_offsets_enabled | Boolean | Whether to synchronize consumer offsets. (Required only when the source type is Kafka.)
replication_factor | Integer | Number of replicas for topics that are automatically created in the peer instance. The value cannot exceed the number of brokers in the peer instance. If default.replication.factor is set in the peer instance, this parameter takes precedence over default.replication.factor. (Required only when the source type is Kafka.)
task_num | Integer | Number of data replication tasks. Default: 2. Keeping the default is recommended. If the synchronization direction is two-way, the actual number of tasks is twice the configured value. (Required only when the source type is Kafka.)
rename_topic_enabled | Boolean | Whether to rename topics. If enabled, the alias of the source Kafka instance is prepended to the target topic name to form the new target topic name. (Required only when the source type is Kafka.)
provenance_header_enabled | Boolean | Whether replicated messages in the target topic carry a header indicating the message source. Enable this option for two-way replication between the two instances to prevent circular replication. (Required only when the source type is Kafka.)
consumer_strategy | String | Start offset. latest: consume the latest data; earliest: consume the earliest data. (Required only when the source type is Kafka.)
compression_type | String | Compression algorithm used for replicated messages: none, gzip, snappy, lz4, or zstd. (Required only when the source type is Kafka.)
topics_mapping | String | Topic mapping, used to customize target topic names. Topic renaming and topic mapping cannot be configured at the same time. Use the format "source_topic:target_topic"; separate multiple mappings with commas, for example, topic-sc-1:topic-sc-2,topic-sc-3:topic-sc-4. (Required only when the source type is Kafka.)

Table 4 SmartConnectTaskReqSinkConfig

Parameter | Type | Description
redis_address | String | Redis instance address. (Required only when the sink type is Redis.)
redis_type | String | Redis instance type. (Required only when the sink type is Redis.)
dcs_instance_id | String | DCS instance ID. (Required only when the sink type is Redis.)
redis_password | String | Redis password. (Required only when the sink type is Redis.)
consumer_strategy | String | Dump start offset. latest: consume the latest data; earliest: consume the earliest data. (Required only when the sink type is OBS.)
destination_file_type | String | Dump file format. Currently only TEXT is supported. (Required only when the sink type is OBS.)
deliver_time_interval | Integer | Dump interval, in seconds. Default: 300. (Required only when the sink type is OBS.)
access_key | String | Access key ID (AK). (Required only when the sink type is OBS.)
secret_key | String | Secret access key (SK) used together with the access key ID. (Required only when the sink type is OBS.)
obs_bucket_name | String | Dump address, that is, the name of the OBS bucket that stores the topic data. (Required only when the sink type is OBS.)
obs_path | String | Dump directory, that is, the OBS directory that stores the topic data. Separate directory levels with "/". (Required only when the sink type is OBS.)
partition_format | String | Time-based directory format: yyyy (year), yyyy/MM (year/month), yyyy/MM/dd (year/month/day), yyyy/MM/dd/HH (year/month/day/hour), or yyyy/MM/dd/HH/mm (year/month/day/hour/minute). (Required only when the sink type is OBS.)
record_delimiter | String | Record delimiter used to separate the user data written to dump files. Options: comma ",", semicolon ";", vertical bar "|", newline "\n", or NULL. (Required only when the sink type is OBS.)
store_keys | Boolean | Whether to dump message keys. (Required only when the sink type is OBS.)

Response Parameters

Status code: 200

Table 5 Response body parameters

Parameter | Type | Description
task_name | String | Smart Connect task name.
topics | String | Topics configured for the Smart Connect task.
topics_regex | String | Regular expression of the topics configured for the Smart Connect task.
source_type | String | Source type of the Smart Connect task.
source_task | SmartConnectTaskRespSourceConfig object | Source configuration of the Smart Connect task.
sink_type | String | Sink (target) type of the Smart Connect task.
sink_task | SmartConnectTaskRespSinkConfig object | Sink (target) configuration of the Smart Connect task.
id | String | Smart Connect task ID.
status | String | Smart Connect task status.
create_time | Long | Time when the Smart Connect task was created.

Table 6 SmartConnectTaskRespSourceConfig

Parameter | Type | Description
redis_address | String | Redis instance address. (Returned only when the source type is Redis.)
redis_type | String | Redis instance type. (Returned only when the source type is Redis.)
dcs_instance_id | String | DCS instance ID. (Returned only when the source type is Redis.)
sync_mode | String | Synchronization type: RDB_ONLY for full synchronization, CUSTOM_OFFSET for full plus incremental synchronization. (Returned only when the source type is Redis.)
full_sync_wait_ms | Integer | Full synchronization retry interval, in milliseconds. (Returned only when the source type is Redis.)
full_sync_max_retry | Integer | Maximum number of full synchronization retries. (Returned only when the source type is Redis.)
ratelimit | Integer | Rate limit, in KB/s. -1 indicates no limit. (Returned only when the source type is Redis.)
current_cluster_name | String | Alias of the current Kafka instance. (Returned only when the source type is Kafka.)
cluster_name | String | Alias of the peer Kafka instance. (Returned only when the source type is Kafka.)
user_name | String | Username of the peer Kafka instance. (Returned only when the source type is Kafka.)
sasl_mechanism | String | Authentication mechanism of the peer Kafka instance. (Returned only when the source type is Kafka.)
instance_id | String | ID of the peer Kafka instance. (Returned only when the source type is Kafka.)
bootstrap_servers | String | Address of the peer Kafka instance. (Returned only when the source type is Kafka.)
security_protocol | String | Authentication mode of the peer Kafka instance. (Returned only when the source type is Kafka.)
direction | String | Synchronization direction. (Returned only when the source type is Kafka.)
sync_consumer_offsets_enabled | Boolean | Whether consumer offsets are synchronized. (Returned only when the source type is Kafka.)
replication_factor | Integer | Number of replicas. (Returned only when the source type is Kafka.)
task_num | Integer | Number of tasks. (Returned only when the source type is Kafka.)
rename_topic_enabled | Boolean | Whether topics are renamed. (Returned only when the source type is Kafka.)
provenance_header_enabled | Boolean | Whether a provenance header is added. (Returned only when the source type is Kafka.)
consumer_strategy | String | Start offset. latest: consume the latest data; earliest: consume the earliest data. (Returned only when the source type is Kafka.)
compression_type | String | Compression algorithm. (Returned only when the source type is Kafka.)
topics_mapping | String | Topic mapping. (Returned only when the source type is Kafka.)

Table 7 SmartConnectTaskRespSinkConfig

Parameter | Type | Description
redis_address | String | Redis instance address. (Returned only when the sink type is Redis.)
redis_type | String | Redis instance type. (Returned only when the sink type is Redis.)
dcs_instance_id | String | DCS instance ID. (Returned only when the sink type is Redis.)
target_db | Integer | Target database. Default: -1. (Returned only when the sink type is Redis.)
consumer_strategy | String | Dump start offset. latest: consume the latest data; earliest: consume the earliest data. (Returned only when the sink type is OBS.)
destination_file_type | String | Dump file format. Currently only TEXT is supported. (Returned only when the sink type is OBS.)
deliver_time_interval | Integer | Dump interval, in seconds. (Returned only when the sink type is OBS.)
obs_bucket_name | String | Dump address. (Returned only when the sink type is OBS.)
obs_path | String | Dump directory. (Returned only when the sink type is OBS.)
partition_format | String | Time-based directory format. (Returned only when the sink type is OBS.)
record_delimiter | String | Record delimiter. (Returned only when the sink type is OBS.)
store_keys | Boolean | Whether message keys are dumped. (Returned only when the sink type is OBS.)
obs_part_size | Integer | Size, in bytes, at which a transfer file starts to be uploaded. Default: 5242880. (Returned only when the sink type is OBS.)
flush_size | Integer | flush_size. (Returned only when the sink type is OBS.)
timezone | String | Time zone. (Returned only when the sink type is OBS.)
schema_generator_class | String | Schema generator class. Default: "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator". (Returned only when the sink type is OBS.)
partitioner_class | String | Partitioner class. Default: "io.confluent.connect.storage.partitioner.TimeBasedPartitioner". (Returned only when the sink type is OBS.)
value_converter | String | Value converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". (Returned only when the sink type is OBS.)
key_converter | String | Key converter. Default: "org.apache.kafka.connect.converters.ByteArrayConverter". (Returned only when the sink type is OBS.)
kv_delimiter | String | Key-value delimiter. Default: ":". (Returned only when the sink type is OBS.)

Example Requests

  • Creating a dump task that starts immediately.

    POST https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks
    
    {
      "task_name" : "smart-connect-1",
      "start_later" : false,
      "source_type" : "NONE",
      "topics_regex" : "topic-obs*",
      "sink_type" : "OBS_SINK",
      "sink_task" : {
        "consumer_strategy" : "earliest",
        "destination_file_type" : "TEXT",
        "deliver_time_interval" : 300,
        "access_key" : "********",
        "secret_key" : "********",
        "obs_bucket_name" : "obs_bucket",
        "obs_path" : "obsTransfer-1810125534",
        "partition_format" : "yyyy/MM/dd/HH/mm",
        "record_delimiter" : "\\n",
        "store_keys" : false
      }
    }
  • Creating a Kafka data replication task that starts later.

    POST https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks
    
    {
      "task_name" : "smart-connect-2",
      "start_later" : true,
      "source_type" : "KAFKA_REPLICATOR_SOURCE",
      "source_task" : {
        "current_cluster_name" : "A",
        "cluster_name" : "B",
        "user_name" : "user1",
        "password" : "********",
        "sasl_mechanism" : "SCRAM-SHA-512",
        "instance_id" : "b54c9dd8-********-********",
        "direction" : "two-way",
        "sync_consumer_offsets_enabled" : false,
        "replication_factor" : 3,
        "task_num" : 2,
        "rename_topic_enabled" : false,
        "provenance_header_enabled" : true,
        "consumer_strategy" : "latest",
        "compression_type" : "snappy",
        "topics_mapping" : "topic-sc-1:topic-sc-3,topic-sc-2:topic-sc-4"
      }
    }
  • Creating a Redis data replication task that starts immediately, using full synchronization, with a maximum of 10 full synchronization retries, a retry interval of 10000 ms, and a rate limit of 10 KB/s.

    POST https://{endpoint}/v2/{project_id}/instances/{instance_id}/connector/tasks
    
    {
      "task_name" : "smart-connect-3",
      "start_later" : false,
      "source_type" : "REDIS_REPLICATOR_SOURCE",
      "source_task" : {
        "redis_address" : "192.168.99.249:6379,192.168.120.127:6379,192.168.116.219:6379",
        "redis_type" : "cluster",
        "redis_password" : "********",
        "sync_mode" : "RDB_ONLY",
        "full_sync_max_retry" : 10,
        "full_sync_wait_ms" : 10000,
        "ratelimit" : 10
      },
      "topics" : "topic-sc-3",
      "sink_type" : "REDIS_REPLICATOR_SINK",
      "sink_task" : {
        "redis_address" : "192.168.119.51:6379",
        "redis_type" : "standalone",
        "redis_password" : "********"
      }
    }

Example Responses

Status code: 200

The Smart Connect task is created successfully.

{
  "task_name" : "smart-connect-121248117",
  "topics" : "topic-sc",
  "source_task" : {
    "redis_address" : "192.168.91.179:6379",
    "redis_type" : "standalone",
    "dcs_instance_id" : "949190a2-598a-4afd-99a8-dad3cae1e7cd",
    "sync_mode" : "RDB_ONLY,",
    "full_sync_wait_ms" : 13000,
    "full_sync_max_retry" : 4,
    "ratelimit" : -1
  },
  "source_type" : "REDIS_REPLICATOR_SOURCE",
  "sink_task" : {
    "redis_address" : "192.168.119.51:6379",
    "redis_type" : "standalone",
    "dcs_instance_id" : "9b981368-a8e3-416a-87d9-1581a968b41b",
    "target_db" : -1
  },
  "sink_type" : "REDIS_REPLICATOR_SINK",
  "id" : "8a205bbd-7181-4b5e-9bd6-37274ce84577",
  "status" : "RUNNING",
  "create_time" : 1708427753133
}

SDK Sample Code

The following are SDK sample code examples in Java, Python, and Go.

  • Creating a dump task that starts immediately (Java).

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
    import com.huaweicloud.sdk.kafka.v2.*;
    import com.huaweicloud.sdk.kafka.v2.model.*;
    
    
    public class CreateConnectorTaskSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            KafkaClient client = KafkaClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateConnectorTaskRequest request = new CreateConnectorTaskRequest();
            request.withInstanceId("{instance_id}");
            CreateSmartConnectTaskReq body = new CreateSmartConnectTaskReq();
            SmartConnectTaskReqSinkConfig sinkTaskbody = new SmartConnectTaskReqSinkConfig();
            sinkTaskbody.withConsumerStrategy("earliest")
                .withDestinationFileType("TEXT")
                .withDeliverTimeInterval(300)
                .withAccessKey("********")
                .withSecretKey("********")
                .withObsBucketName("obs_bucket")
                .withObsPath("obsTransfer-1810125534")
                .withPartitionFormat("yyyy/MM/dd/HH/mm")
                .withRecordDelimiter("\n")
                .withStoreKeys(false);
            body.withSinkTask(sinkTaskbody);
            body.withSinkType(CreateSmartConnectTaskReq.SinkTypeEnum.fromValue("OBS_SINK"));
            body.withSourceType(CreateSmartConnectTaskReq.SourceTypeEnum.fromValue("NONE"));
            body.withTopicsRegex("topic-obs*");
            body.withStartLater(false);
            body.withTaskName("smart-connect-1");
            request.withBody(body);
            try {
                CreateConnectorTaskResponse response = client.createConnectorTask(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Creating a Kafka data replication task that starts later (Java).

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
    import com.huaweicloud.sdk.kafka.v2.*;
    import com.huaweicloud.sdk.kafka.v2.model.*;
    
    
    public class CreateConnectorTaskSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            KafkaClient client = KafkaClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateConnectorTaskRequest request = new CreateConnectorTaskRequest();
            request.withInstanceId("{instance_id}");
            CreateSmartConnectTaskReq body = new CreateSmartConnectTaskReq();
            SmartConnectTaskReqSourceConfig sourceTaskbody = new SmartConnectTaskReqSourceConfig();
            sourceTaskbody.withCurrentClusterName("A")
                .withClusterName("B")
                .withUserName("user1")
                .withPassword("********")
                .withSaslMechanism("SCRAM-SHA-512")
                .withInstanceId("b54c9dd8-********-********")
                .withDirection("two-way")
                .withSyncConsumerOffsetsEnabled(false)
                .withReplicationFactor(3)
                .withTaskNum(2)
                .withRenameTopicEnabled(false)
                .withProvenanceHeaderEnabled(true)
                .withConsumerStrategy("latest")
                .withCompressionType("snappy")
                .withTopicsMapping("topic-sc-1:topic-sc-3,topic-sc-2:topic-sc-4");
            body.withSourceTask(sourceTaskbody);
            body.withSourceType(CreateSmartConnectTaskReq.SourceTypeEnum.fromValue("KAFKA_REPLICATOR_SOURCE"));
            body.withStartLater(true);
            body.withTaskName("smart-connect-2");
            request.withBody(body);
            try {
                CreateConnectorTaskResponse response = client.createConnectorTask(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Creating a Redis data replication task that starts immediately, using full synchronization, with a maximum of 10 full synchronization retries, a retry interval of 10000 ms, and a rate limit of 10 KB/s (Java).

    package com.huaweicloud.sdk.test;
    
    import com.huaweicloud.sdk.core.auth.ICredential;
    import com.huaweicloud.sdk.core.auth.BasicCredentials;
    import com.huaweicloud.sdk.core.exception.ConnectionException;
    import com.huaweicloud.sdk.core.exception.RequestTimeoutException;
    import com.huaweicloud.sdk.core.exception.ServiceResponseException;
    import com.huaweicloud.sdk.kafka.v2.region.KafkaRegion;
    import com.huaweicloud.sdk.kafka.v2.*;
    import com.huaweicloud.sdk.kafka.v2.model.*;
    
    
    public class CreateConnectorTaskSolution {
    
        public static void main(String[] args) {
            // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
            // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
            String ak = System.getenv("CLOUD_SDK_AK");
            String sk = System.getenv("CLOUD_SDK_SK");
            String projectId = "{project_id}";
    
            ICredential auth = new BasicCredentials()
                    .withProjectId(projectId)
                    .withAk(ak)
                    .withSk(sk);
    
            KafkaClient client = KafkaClient.newBuilder()
                    .withCredential(auth)
                    .withRegion(KafkaRegion.valueOf("<YOUR REGION>"))
                    .build();
            CreateConnectorTaskRequest request = new CreateConnectorTaskRequest();
            request.withInstanceId("{instance_id}");
            CreateSmartConnectTaskReq body = new CreateSmartConnectTaskReq();
            SmartConnectTaskReqSinkConfig sinkTaskbody = new SmartConnectTaskReqSinkConfig();
            sinkTaskbody.withRedisAddress("192.168.119.51:6379")
                .withRedisType("standalone")
                .withRedisPassword("********");
            SmartConnectTaskReqSourceConfig sourceTaskbody = new SmartConnectTaskReqSourceConfig();
            sourceTaskbody.withRedisAddress("192.168.99.249:6379,192.168.120.127:6379,192.168.116.219:6379")
                .withRedisType("cluster")
                .withRedisPassword("********")
                .withSyncMode("RDB_ONLY")
                .withFullSyncWaitMs(10000)
                .withFullSyncMaxRetry(10)
                .withRatelimit(10);
            body.withSinkTask(sinkTaskbody);
            body.withSinkType(CreateSmartConnectTaskReq.SinkTypeEnum.fromValue("REDIS_REPLICATOR_SINK"));
            body.withSourceTask(sourceTaskbody);
            body.withSourceType(CreateSmartConnectTaskReq.SourceTypeEnum.fromValue("REDIS_REPLICATOR_SOURCE"));
            body.withTopics("topic-sc-3");
            body.withStartLater(false);
            body.withTaskName("smart-connect-3");
            request.withBody(body);
            try {
                CreateConnectorTaskResponse response = client.createConnectorTask(request);
                System.out.println(response.toString());
            } catch (ConnectionException e) {
                e.printStackTrace();
            } catch (RequestTimeoutException e) {
                e.printStackTrace();
            } catch (ServiceResponseException e) {
                e.printStackTrace();
                System.out.println(e.getHttpStatusCode());
                System.out.println(e.getRequestId());
                System.out.println(e.getErrorCode());
                System.out.println(e.getErrorMsg());
            }
        }
    }
    
  • Creating a dump task that starts immediately (Python).

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkkafka.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = KafkaClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateConnectorTaskRequest()
            request.instance_id = "{instance_id}"
            sinkTaskbody = SmartConnectTaskReqSinkConfig(
                consumer_strategy="earliest",
                destination_file_type="TEXT",
                deliver_time_interval=300,
                access_key="********",
                secret_key="********",
                obs_bucket_name="obs_bucket",
                obs_path="obsTransfer-1810125534",
                partition_format="yyyy/MM/dd/HH/mm",
                record_delimiter="\n",
                store_keys=False
            )
            request.body = CreateSmartConnectTaskReq(
                sink_task=sinkTaskbody,
                sink_type="OBS_SINK",
                source_type="NONE",
                topics_regex="topic-obs*",
                start_later=False,
                task_name="smart-connect-1"
            )
            response = client.create_connector_task(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Creating a Kafka data replication task that starts later (Python).

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkkafka.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = KafkaClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateConnectorTaskRequest()
            request.instance_id = "{instance_id}"
            sourceTaskbody = SmartConnectTaskReqSourceConfig(
                current_cluster_name="A",
                cluster_name="B",
                user_name="user1",
                password="********",
                sasl_mechanism="SCRAM-SHA-512",
                instance_id="b54c9dd8-********-********",
                direction="two-way",
                sync_consumer_offsets_enabled=False,
                replication_factor=3,
                task_num=2,
                rename_topic_enabled=False,
                provenance_header_enabled=True,
                consumer_strategy="latest",
                compression_type="snappy",
                topics_mapping="topic-sc-1:topic-sc-3,topic-sc-2:topic-sc-4"
            )
            request.body = CreateSmartConnectTaskReq(
                source_task=sourceTaskbody,
                source_type="KAFKA_REPLICATOR_SOURCE",
                start_later=True,
                task_name="smart-connect-2"
            )
            response = client.create_connector_task(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Creating a Redis data replication task that starts immediately, using full synchronization, with a maximum of 10 full synchronization retries, a retry interval of 10000 ms, and a rate limit of 10 KB/s (Python).

    # coding: utf-8
    
    import os
    from huaweicloudsdkcore.auth.credentials import BasicCredentials
    from huaweicloudsdkkafka.v2.region.kafka_region import KafkaRegion
    from huaweicloudsdkcore.exceptions import exceptions
    from huaweicloudsdkkafka.v2 import *
    
    if __name__ == "__main__":
        # The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        # In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak = os.environ["CLOUD_SDK_AK"]
        sk = os.environ["CLOUD_SDK_SK"]
        projectId = "{project_id}"
    
        credentials = BasicCredentials(ak, sk, projectId)
    
        client = KafkaClient.new_builder() \
            .with_credentials(credentials) \
            .with_region(KafkaRegion.value_of("<YOUR REGION>")) \
            .build()
    
        try:
            request = CreateConnectorTaskRequest()
            request.instance_id = "{instance_id}"
            sinkTaskbody = SmartConnectTaskReqSinkConfig(
                redis_address="192.168.119.51:6379",
                redis_type="standalone",
                redis_password="********"
            )
            sourceTaskbody = SmartConnectTaskReqSourceConfig(
                redis_address="192.168.99.249:6379,192.168.120.127:6379,192.168.116.219:6379",
                redis_type="cluster",
                redis_password="********",
                sync_mode="RDB_ONLY",
                full_sync_wait_ms=10000,
                full_sync_max_retry=10,
                ratelimit=10
            )
            request.body = CreateSmartConnectTaskReq(
                sink_task=sinkTaskbody,
                sink_type="REDIS_REPLICATOR_SINK",
                source_task=sourceTaskbody,
                source_type="REDIS_REPLICATOR_SOURCE",
                topics="topic-sc-3",
                start_later=False,
                task_name="smart-connect-3"
            )
            response = client.create_connector_task(request)
            print(response)
        except exceptions.ClientRequestException as e:
            print(e.status_code)
            print(e.request_id)
            print(e.error_code)
            print(e.error_msg)
    
  • Creating a dump task that starts immediately (Go).

    package main
    
    import (
    	"fmt"
    	"os"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := kafka.NewKafkaClient(
            kafka.KafkaClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateConnectorTaskRequest{}
    	request.InstanceId = "{instance_id}"
    	consumerStrategySinkTask:= "earliest"
    	destinationFileTypeSinkTask:= "TEXT"
    	deliverTimeIntervalSinkTask:= int32(300)
    	accessKeySinkTask:= "********"
    	secretKeySinkTask:= "********"
    	obsBucketNameSinkTask:= "obs_bucket"
    	obsPathSinkTask:= "obsTransfer-1810125534"
    	partitionFormatSinkTask:= "yyyy/MM/dd/HH/mm"
    	recordDelimiterSinkTask:= "\n"
    	storeKeysSinkTask:= false
    	sinkTaskbody := &model.SmartConnectTaskReqSinkConfig{
    		ConsumerStrategy: &consumerStrategySinkTask,
    		DestinationFileType: &destinationFileTypeSinkTask,
    		DeliverTimeInterval: &deliverTimeIntervalSinkTask,
    		AccessKey: &accessKeySinkTask,
    		SecretKey: &secretKeySinkTask,
    		ObsBucketName: &obsBucketNameSinkTask,
    		ObsPath: &obsPathSinkTask,
    		PartitionFormat: &partitionFormatSinkTask,
    		RecordDelimiter: &recordDelimiterSinkTask,
    		StoreKeys: &storeKeysSinkTask,
    	}
    	sinkTypeCreateSmartConnectTaskReq:= model.GetCreateSmartConnectTaskReqSinkTypeEnum().OBS_SINK
    	sourceTypeCreateSmartConnectTaskReq:= model.GetCreateSmartConnectTaskReqSourceTypeEnum().NONE
    	topicsRegexCreateSmartConnectTaskReq:= "topic-obs*"
    	startLaterCreateSmartConnectTaskReq:= false
    	taskNameCreateSmartConnectTaskReq:= "smart-connect-1"
    	request.Body = &model.CreateSmartConnectTaskReq{
    		SinkTask: sinkTaskbody,
    		SinkType: &sinkTypeCreateSmartConnectTaskReq,
    		SourceType: &sourceTypeCreateSmartConnectTaskReq,
    		TopicsRegex: &topicsRegexCreateSmartConnectTaskReq,
    		StartLater: &startLaterCreateSmartConnectTaskReq,
    		TaskName: &taskNameCreateSmartConnectTaskReq,
    	}
    	response, err := client.CreateConnectorTask(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Creating a Kafka data replication task that starts later (Go).

    package main
    
    import (
    	"fmt"
    	"os"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := kafka.NewKafkaClient(
            kafka.KafkaClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateConnectorTaskRequest{}
    	request.InstanceId = "{instance_id}"
    	currentClusterNameSourceTask:= "A"
    	clusterNameSourceTask:= "B"
    	userNameSourceTask:= "user1"
    	passwordSourceTask:= "********"
    	saslMechanismSourceTask:= "SCRAM-SHA-512"
    	instanceIdSourceTask:= "b54c9dd8-********-********"
    	directionSourceTask:= "two-way"
    	syncConsumerOffsetsEnabledSourceTask:= false
    	replicationFactorSourceTask:= int32(3)
    	taskNumSourceTask:= int32(2)
    	renameTopicEnabledSourceTask:= false
    	provenanceHeaderEnabledSourceTask:= true
    	consumerStrategySourceTask:= "latest"
    	compressionTypeSourceTask:= "snappy"
    	topicsMappingSourceTask:= "topic-sc-1:topic-sc-3,topic-sc-2:topic-sc-4"
    	sourceTaskbody := &model.SmartConnectTaskReqSourceConfig{
    		CurrentClusterName: &currentClusterNameSourceTask,
    		ClusterName: &clusterNameSourceTask,
    		UserName: &userNameSourceTask,
    		Password: &passwordSourceTask,
    		SaslMechanism: &saslMechanismSourceTask,
    		InstanceId: &instanceIdSourceTask,
    		Direction: &directionSourceTask,
    		SyncConsumerOffsetsEnabled: &syncConsumerOffsetsEnabledSourceTask,
    		ReplicationFactor: &replicationFactorSourceTask,
    		TaskNum: &taskNumSourceTask,
    		RenameTopicEnabled: &renameTopicEnabledSourceTask,
    		ProvenanceHeaderEnabled: &provenanceHeaderEnabledSourceTask,
    		ConsumerStrategy: &consumerStrategySourceTask,
    		CompressionType: &compressionTypeSourceTask,
    		TopicsMapping: &topicsMappingSourceTask,
    	}
    	sourceTypeCreateSmartConnectTaskReq:= model.GetCreateSmartConnectTaskReqSourceTypeEnum().KAFKA_REPLICATOR_SOURCE
    	startLaterCreateSmartConnectTaskReq:= true
    	taskNameCreateSmartConnectTaskReq:= "smart-connect-2"
    	request.Body = &model.CreateSmartConnectTaskReq{
    		SourceTask: sourceTaskbody,
    		SourceType: &sourceTypeCreateSmartConnectTaskReq,
    		StartLater: &startLaterCreateSmartConnectTaskReq,
    		TaskName: &taskNameCreateSmartConnectTaskReq,
    	}
    	response, err := client.CreateConnectorTask(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    
  • Creating a Redis data replication task that starts immediately, using full synchronization, with a maximum of 10 full synchronization retries, a retry interval of 10000 ms, and a rate limit of 10 KB/s (Go).

    package main
    
    import (
    	"fmt"
    	"os"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/core/auth/basic"
    	kafka "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2"
    	"github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/model"
    	region "github.com/huaweicloud/huaweicloud-sdk-go-v3/services/kafka/v2/region"
    )
    
    func main() {
        // The AK and SK used for authentication are hard-coded or stored in plaintext, which has great security risks. It is recommended that the AK and SK be stored in ciphertext in configuration files or environment variables and decrypted during use to ensure security.
        // In this example, AK and SK are stored in environment variables for authentication. Before running this example, set environment variables CLOUD_SDK_AK and CLOUD_SDK_SK in the local environment
        ak := os.Getenv("CLOUD_SDK_AK")
        sk := os.Getenv("CLOUD_SDK_SK")
        projectId := "{project_id}"
    
        auth := basic.NewCredentialsBuilder().
            WithAk(ak).
            WithSk(sk).
            WithProjectId(projectId).
            Build()
    
        client := kafka.NewKafkaClient(
            kafka.KafkaClientBuilder().
                WithRegion(region.ValueOf("<YOUR REGION>")).
                WithCredential(auth).
                Build())
    
        request := &model.CreateConnectorTaskRequest{}
    	request.InstanceId = "{instance_id}"
    	redisAddressSinkTask:= "192.168.119.51:6379"
    	redisTypeSinkTask:= "standalone"
    	redisPasswordSinkTask:= "********"
    	sinkTaskbody := &model.SmartConnectTaskReqSinkConfig{
    		RedisAddress: &redisAddressSinkTask,
    		RedisType: &redisTypeSinkTask,
    		RedisPassword: &redisPasswordSinkTask,
    	}
    	redisAddressSourceTask:= "192.168.99.249:6379,192.168.120.127:6379,192.168.116.219:6379"
    	redisTypeSourceTask:= "cluster"
    	redisPasswordSourceTask:= "********"
    	syncModeSourceTask:= "RDB_ONLY"
    	fullSyncWaitMsSourceTask:= int32(10000)
    	fullSyncMaxRetrySourceTask:= int32(10)
    	ratelimitSourceTask:= int32(10)
    	sourceTaskbody := &model.SmartConnectTaskReqSourceConfig{
    		RedisAddress: &redisAddressSourceTask,
    		RedisType: &redisTypeSourceTask,
    		RedisPassword: &redisPasswordSourceTask,
    		SyncMode: &syncModeSourceTask,
    		FullSyncWaitMs: &fullSyncWaitMsSourceTask,
    		FullSyncMaxRetry: &fullSyncMaxRetrySourceTask,
    		Ratelimit: &ratelimitSourceTask,
    	}
    	sinkTypeCreateSmartConnectTaskReq:= model.GetCreateSmartConnectTaskReqSinkTypeEnum().REDIS_REPLICATOR_SINK
    	sourceTypeCreateSmartConnectTaskReq:= model.GetCreateSmartConnectTaskReqSourceTypeEnum().REDIS_REPLICATOR_SOURCE
    	topicsCreateSmartConnectTaskReq:= "topic-sc-3"
    	startLaterCreateSmartConnectTaskReq:= false
    	taskNameCreateSmartConnectTaskReq:= "smart-connect-3"
    	request.Body = &model.CreateSmartConnectTaskReq{
    		SinkTask: sinkTaskbody,
    		SinkType: &sinkTypeCreateSmartConnectTaskReq,
    		SourceTask: sourceTaskbody,
    		SourceType: &sourceTypeCreateSmartConnectTaskReq,
    		Topics: &topicsCreateSmartConnectTaskReq,
    		StartLater: &startLaterCreateSmartConnectTaskReq,
    		TaskName: &taskNameCreateSmartConnectTaskReq,
    	}
    	response, err := client.CreateConnectorTask(request)
    	if err == nil {
            fmt.Printf("%+v\n", response)
        } else {
            fmt.Println(err)
        }
    }
    

For SDK sample code in more programming languages, see the Sample Code tab in API Explorer, which can automatically generate the corresponding SDK sample code.

Status Codes

Status Code | Description
200 | The Smart Connect task is created successfully.

Error Codes

See Error Codes.

Related Documents